Jan 26 14:46:42 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 26 14:46:42 crc restorecon[4754]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 26 14:46:42 crc restorecon[4754]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 
14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:42
crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 
14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc 
restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:42 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 
crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc 
restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc 
restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc 
restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 26 14:46:43 crc restorecon[4754]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 26 14:46:43 crc kubenswrapper[4823]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 14:46:43 crc kubenswrapper[4823]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 26 14:46:43 crc kubenswrapper[4823]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 14:46:43 crc kubenswrapper[4823]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
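The deprecation warnings above all point the same way: these kubelet command-line flags should migrate into the config file passed via --config. A minimal sketch of what that KubeletConfiguration might look like, assuming a hypothetical path such as /etc/kubernetes/kubelet.conf and illustrative values (none of these values are taken from this node; field names follow the upstream KubeletConfiguration v1beta1 schema):

```yaml
# Hypothetical kubelet config file referenced by --config.
# Each field below replaces one of the deprecated flags seen in the log.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"
# replaces --volume-plugin-dir
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"
# replaces --register-with-taints
registerWithTaints:
  - key: "node-role.kubernetes.io/master"
    effect: "NoSchedule"
# eviction thresholds, suggested by the log as the successor
# to --minimum-container-ttl-duration
evictionHard:
  memory.available: "100Mi"
# replaces --system-reserved
systemReserved:
  cpu: "500m"
  memory: "1Gi"
```

This is a sketch under stated assumptions, not this node's actual configuration; on an OpenShift/CRC node the kubelet config is typically rendered by the Machine Config Operator rather than edited by hand.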
Jan 26 14:46:43 crc kubenswrapper[4823]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 26 14:46:43 crc kubenswrapper[4823]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.420559 4823 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423195 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423215 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423222 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423227 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423232 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423237 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423242 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423249 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423255 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 26 14:46:43 crc 
kubenswrapper[4823]: W0126 14:46:43.423260 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423265 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423270 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423274 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423279 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423284 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423290 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423296 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423301 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423306 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423311 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423316 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423327 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423333 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423338 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423342 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423347 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423352 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423356 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423378 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423383 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423388 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423392 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423397 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423401 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423406 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423410 4823 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423415 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423419 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423425 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423430 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423435 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423440 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423444 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423448 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423454 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423459 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423464 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423469 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423474 4823 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423481 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423486 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423491 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423496 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423500 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423505 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423510 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423517 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423521 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423526 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423531 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423538 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423543 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423547 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423551 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423555 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423561 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423566 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423571 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423575 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423580 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.423585 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423704 4823 flags.go:64] FLAG: --address="0.0.0.0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423728 4823 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423746 4823 flags.go:64] FLAG: --anonymous-auth="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423752 4823 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423758 4823 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423763 4823 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423770 4823 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423776 4823 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423781 4823 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423786 4823 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423805 4823 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423811 4823 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423816 4823 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423821 4823 flags.go:64] FLAG: --cgroup-root=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423826 4823 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423831 4823 flags.go:64] FLAG: --client-ca-file=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423836 4823 flags.go:64] FLAG: --cloud-config=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423841 4823 flags.go:64] FLAG: --cloud-provider=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423846 4823 flags.go:64] FLAG: --cluster-dns="[]"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423857 4823 flags.go:64] FLAG: --cluster-domain=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423862 4823 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423868 4823 flags.go:64] FLAG: --config-dir=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423873 4823 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423879 4823 flags.go:64] FLAG: --container-log-max-files="5"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423887 4823 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423893 4823 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423898 4823 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423904 4823 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423911 4823 flags.go:64] FLAG: --contention-profiling="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423916 4823 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423921 4823 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423926 4823 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423931 4823 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423938 4823 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423944 4823 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423949 4823 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423953 4823 flags.go:64] FLAG: --enable-load-reader="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423965 4823 flags.go:64] FLAG: --enable-server="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423971 4823 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423978 4823 flags.go:64] FLAG: --event-burst="100"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423983 4823 flags.go:64] FLAG: --event-qps="50"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423988 4823 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423993 4823 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.423998 4823 flags.go:64] FLAG: --eviction-hard=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424005 4823 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424010 4823 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424015 4823 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424020 4823 flags.go:64] FLAG: --eviction-soft=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424025 4823 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424030 4823 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424036 4823 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424042 4823 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424047 4823 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424052 4823 flags.go:64] FLAG: --fail-swap-on="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424057 4823 flags.go:64] FLAG: --feature-gates=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424063 4823 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424069 4823 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424074 4823 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424079 4823 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424084 4823 flags.go:64] FLAG: --healthz-port="10248"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424090 4823 flags.go:64] FLAG: --help="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424095 4823 flags.go:64] FLAG: --hostname-override=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424100 4823 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424104 4823 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424109 4823 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424115 4823 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424120 4823 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424125 4823 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424130 4823 flags.go:64] FLAG: --image-service-endpoint=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424135 4823 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424140 4823 flags.go:64] FLAG: --kube-api-burst="100"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424145 4823 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424150 4823 flags.go:64] FLAG: --kube-api-qps="50"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424156 4823 flags.go:64] FLAG: --kube-reserved=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424161 4823 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424166 4823 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424172 4823 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424177 4823 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424182 4823 flags.go:64] FLAG: --lock-file=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424187 4823 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424192 4823 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424197 4823 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424206 4823 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424211 4823 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424216 4823 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424221 4823 flags.go:64] FLAG: --logging-format="text"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424226 4823 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424232 4823 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424237 4823 flags.go:64] FLAG: --manifest-url=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424242 4823 flags.go:64] FLAG: --manifest-url-header=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424249 4823 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424254 4823 flags.go:64] FLAG: --max-open-files="1000000"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424260 4823 flags.go:64] FLAG: --max-pods="110"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424265 4823 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424271 4823 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424276 4823 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424281 4823 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424286 4823 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424291 4823 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424296 4823 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424308 4823 flags.go:64] FLAG: --node-status-max-images="50"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424313 4823 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424318 4823 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424323 4823 flags.go:64] FLAG: --pod-cidr=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424328 4823 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424338 4823 flags.go:64] FLAG: --pod-manifest-path=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424342 4823 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424347 4823 flags.go:64] FLAG: --pods-per-core="0"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424353 4823 flags.go:64] FLAG: --port="10250"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424380 4823 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424387 4823 flags.go:64] FLAG: --provider-id=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424393 4823 flags.go:64] FLAG: --qos-reserved=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424398 4823 flags.go:64] FLAG: --read-only-port="10255"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424405 4823 flags.go:64] FLAG: --register-node="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424411 4823 flags.go:64] FLAG: --register-schedulable="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424417 4823 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424426 4823 flags.go:64] FLAG: --registry-burst="10"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424431 4823 flags.go:64] FLAG: --registry-qps="5"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424437 4823 flags.go:64] FLAG: --reserved-cpus=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424442 4823 flags.go:64] FLAG: --reserved-memory=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424449 4823 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424454 4823 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424460 4823 flags.go:64] FLAG: --rotate-certificates="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424465 4823 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424470 4823 flags.go:64] FLAG: --runonce="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424475 4823 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424480 4823 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424485 4823 flags.go:64] FLAG: --seccomp-default="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424490 4823 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424495 4823 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424500 4823 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424505 4823 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424510 4823 flags.go:64] FLAG: --storage-driver-password="root"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424515 4823 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424520 4823 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424525 4823 flags.go:64] FLAG: --storage-driver-user="root"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424530 4823 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424537 4823 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424542 4823 flags.go:64] FLAG: --system-cgroups=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424547 4823 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424556 4823 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424561 4823 flags.go:64] FLAG: --tls-cert-file=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424566 4823 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424572 4823 flags.go:64] FLAG: --tls-min-version=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424577 4823 flags.go:64] FLAG: --tls-private-key-file=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424590 4823 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424603 4823 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424608 4823 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424613 4823 flags.go:64] FLAG: --v="2"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424620 4823 flags.go:64] FLAG: --version="false"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424627 4823 flags.go:64] FLAG: --vmodule=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424633 4823 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.424639 4823 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424775 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424782 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424787 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424792 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424796 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424801 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424807 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424813 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424819 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424824 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424828 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424833 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424838 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424842 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424847 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424851 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424855 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424859 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424863 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424867 4823 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424871 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424876 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424882 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424887 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424892 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424899 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424903 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424908 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424913 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424919 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424923 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424927 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424931 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424935 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424938 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424942 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424946 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424950 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424953 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424957 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424960 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424964 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424967 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424971 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424975 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424979 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424984 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424988 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424991 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424995 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.424999 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425003 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425007 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425012 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425016 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425019 4823 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425023 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425028 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425033 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425036 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425040 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425043 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425047 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425050 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425055 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425058 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425062 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425065 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425068 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425072 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.425075 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.425086 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.431454 4823 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.431632 4823 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431714 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431723 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431728 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431733 4823 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431737 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431741 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431745 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431748 4823 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431752 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431756 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431759 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431763 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431766 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431770 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431773 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431777 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431780 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431784 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431788 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431793 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431797 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431802 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431806 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431810 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431813 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431817 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431821 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 14:46:43 crc
kubenswrapper[4823]: W0126 14:46:43.431824 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431828 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431832 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431835 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431839 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431842 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431847 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431852 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431856 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431861 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431867 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431872 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431875 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431879 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431882 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431886 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431889 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431893 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431896 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431900 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431905 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431909 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431913 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431917 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431921 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431925 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431929 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431933 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431938 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431942 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431945 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431949 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431953 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431956 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431959 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431963 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431966 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431970 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431973 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431977 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431980 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431984 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431987 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.431992 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.431998 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432103 4823 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432109 4823 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432113 4823 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432117 4823 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432120 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432124 4823 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432127 4823 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432131 4823 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432136 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432140 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432143 4823 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432147 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432151 4823 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432154 4823 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432158 4823 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432162 4823 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432166 4823 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432169 4823 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432173 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432176 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432180 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432183 4823 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432187 4823 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432190 4823 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432194 4823 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432197 4823 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432200 4823 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432204 4823 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432207 4823 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432211 4823 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432214 4823 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432218 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432223 4823 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432227 4823 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432233 4823 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432238 4823 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432243 4823 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432248 4823 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432252 4823 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432256 4823 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432260 4823 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432265 4823 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432270 4823 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432274 4823 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432278 4823 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432283 4823 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432287 4823 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432291 4823 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432298 4823 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432303 4823 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432309 4823 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432314 4823 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432320 4823 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432325 4823 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432330 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432335 4823 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432340 4823 feature_gate.go:330] unrecognized feature gate: Example
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432344 4823 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432349 4823 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432354 4823 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432359 4823 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432383 4823 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432389 4823 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432395 4823 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432399 4823 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432404 4823 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432409 4823 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432414 4823 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432418 4823 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432423 4823 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.432429 4823 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.432436 4823 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.437639 4823 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.440343 4823 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.440477 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.440973 4823 server.go:997] "Starting client certificate rotation"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.440997 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.441119 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-31 13:14:10.818584661 +0000 UTC
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.441212 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.446549 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.455924 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.459491 4823 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.465349 4823 log.go:25] "Validated CRI v1 runtime API"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.478110 4823 log.go:25] "Validated CRI v1 image API"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.479410 4823 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.481780 4823 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-26-14-41-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.481817 4823 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.502413 4823 manager.go:217] Machine: {Timestamp:2026-01-26 14:46:43.50093266 +0000 UTC m=+0.186395805 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:06121041-f4b9-4887-a160-aaea37857ce6 BootID:3ceea7b9-d10c-45de-8939-0873f2d979e6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:98:16:a0 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:98:16:a0 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:4d:00:f8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:e2:13:9b Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:5d:06:32 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a8:e6:cc Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:98:dd:2f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:dc:4b:9e:8b:8e Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:a2:26:4a:f5:7f Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.502687 4823 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.503027 4823 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.503838 4823 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504064 4823 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504108 4823 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504348 4823 topology_manager.go:138] "Creating topology manager with none policy"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504384 4823 container_manager_linux.go:303] "Creating device plugin manager"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504600 4823 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504661 4823 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504870 4823 state_mem.go:36] "Initialized new in-memory state store"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.504978 4823 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.505797 4823 kubelet.go:418] "Attempting to sync node with API server"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.505855 4823 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.505897 4823 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.505918 4823 kubelet.go:324] "Adding apiserver pod source"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.506145 4823 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.508568 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.508572 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused
Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.508642 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.508675 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.508982 4823 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.509434 4823 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.510158 4823 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.511966 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.511995 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512017 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512028 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512044 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512053 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512062 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512084 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512093 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512102 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512114 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512123 4823 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.512422 4823 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.513317 4823 server.go:1280] "Started kubelet" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.513509 4823 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.513605 4823 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.514341 4823 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 26 14:46:43 crc systemd[1]: Started Kubernetes Kubelet. Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.515868 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.516739 4823 server.go:460] "Adding debug handlers to kubelet server" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.516666 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188e4f378732b590 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:46:43.51322664 +0000 UTC m=+0.198689745,LastTimestamp:2026-01-26 14:46:43.51322664 +0000 UTC m=+0.198689745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 14:46:43 crc 
kubenswrapper[4823]: I0126 14:46:43.517276 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.517399 4823 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.517434 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 02:50:33.592636242 +0000 UTC Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.517623 4823 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.517635 4823 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.517628 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.517831 4823 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.517995 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.518225 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.518282 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.520833 4823 factory.go:55] Registering systemd factory Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.521208 4823 factory.go:221] Registration of the systemd container factory successfully Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.522569 4823 factory.go:153] Registering CRI-O factory Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.522599 4823 factory.go:221] Registration of the crio container factory successfully Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.522683 4823 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.522708 4823 factory.go:103] Registering Raw factory Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.522722 4823 manager.go:1196] Started watching for new ooms in manager Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.523451 4823 manager.go:319] Starting recovery of all containers Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528262 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528328 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 26 14:46:43 crc 
kubenswrapper[4823]: I0126 14:46:43.528348 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528388 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528404 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528417 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528437 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528454 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528479 4823 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528493 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528512 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528527 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528571 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528597 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528618 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528632 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528645 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528662 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528677 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528699 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528717 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528734 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528753 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528769 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528786 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528808 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528847 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528864 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528883 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528897 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528918 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528939 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528954 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" 
seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528974 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.528987 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529008 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529023 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529038 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529056 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 
14:46:43.529073 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529090 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529111 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529125 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529141 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529157 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529171 4823 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529190 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529205 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529221 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529234 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529283 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529299 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529398 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529435 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529460 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529479 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529496 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529515 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529528 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529555 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529568 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529582 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529669 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529747 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529765 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529786 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529822 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529931 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529949 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529963 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.529981 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530156 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530199 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530237 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530251 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530264 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530281 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530296 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530312 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530324 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530337 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530386 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530400 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530416 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530429 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530446 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530464 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530477 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530495 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530508 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530522 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530538 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530552 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530569 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530581 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530594 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530614 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530628 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530641 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530657 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530670 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530686 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530698 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530735 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530879 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530959 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.530981 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531032 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531052 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531072 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531086 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531106 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531121 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531160 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531180 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531194 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531241 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531280 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531293 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531307 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531325 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531356 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.531399 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532112 4823 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532178 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532195 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532216 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532229 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532243 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532329 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532344 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532379 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532395 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532485 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532556 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532632 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532679 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532717 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532732 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532750 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532793 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532811 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532877 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532942 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.532964 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533014 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533030 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533043 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533057 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533102 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533116 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533134 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533147 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533198 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533217 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533231 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533245 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533264 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533277 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533319 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533332 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533411 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533471 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533491 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533508 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533546 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533563 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533620 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533633 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533732 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533749 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533773 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533810 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533875 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533969 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.533984 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534000 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534059 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534076 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534222 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534240 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534317 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534330 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534342 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534380 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534426 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc"
volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534450 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534464 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534476 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534494 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534556 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534606 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" 
seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534621 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534671 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534690 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534703 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534717 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534762 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 
14:46:43.534775 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534793 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534806 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534852 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534950 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.534965 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.535048 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.535061 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.535073 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.535089 4823 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.535101 4823 reconstruct.go:97] "Volume reconstruction finished" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.535110 4823 reconciler.go:26] "Reconciler: start to sync state" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.541715 4823 manager.go:324] Recovery completed Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.549442 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.550855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.550888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc 
kubenswrapper[4823]: I0126 14:46:43.550897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.553832 4823 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.553863 4823 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.553884 4823 state_mem.go:36] "Initialized new in-memory state store" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.557341 4823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.558977 4823 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.559010 4823 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.559033 4823 kubelet.go:2335] "Starting kubelet main sync loop" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.559073 4823 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 26 14:46:43 crc kubenswrapper[4823]: W0126 14:46:43.560411 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.560498 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection 
refused" logger="UnhandledError" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.617786 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.659791 4823 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.718042 4823 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.718805 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.753097 4823 policy_none.go:49] "None policy: Start" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.754262 4823 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.754321 4823 state_mem.go:35] "Initializing new in-memory state store" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.806581 4823 manager.go:334] "Starting Device Plugin manager" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.806855 4823 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.806925 4823 server.go:79] "Starting device plugin registration server" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.807448 4823 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.807527 4823 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 26 
14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.808180 4823 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.808351 4823 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.808441 4823 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.813639 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.860151 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.860486 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.861798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.861836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.861844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.861998 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.862103 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.862133 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.863856 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.864113 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.864262 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.864828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.864847 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.864856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.864953 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865189 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865291 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865814 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.865867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.866051 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.866231 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.866286 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867310 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.867934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.868172 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.868204 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.869251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.869276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.869284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.907701 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.908814 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.908840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.908848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.908875 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:46:43 crc kubenswrapper[4823]: E0126 14:46:43.909654 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940570 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940614 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940640 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940674 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940688 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940703 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940731 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940748 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940779 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940794 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940822 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:43 crc kubenswrapper[4823]: I0126 14:46:43.940838 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.042560 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc 
kubenswrapper[4823]: I0126 14:46:44.042413 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.043331 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.043582 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.043700 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.043874 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044086 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044117 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044183 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044225 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044243 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044263 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044318 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044752 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.044865 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045047 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045155 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045185 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045234 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc 
kubenswrapper[4823]: I0126 14:46:44.045217 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.045286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.109851 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.111034 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.111066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.111074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.111094 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.111597 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.119415 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: 
connect: connection refused" interval="800ms" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.200463 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.207393 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.223408 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-65918be13a11261da024d23447d71aa268d501d3c33d34501afaf8f8ca4f15f5 WatchSource:0}: Error finding container 65918be13a11261da024d23447d71aa268d501d3c33d34501afaf8f8ca4f15f5: Status 404 returned error can't find the container with id 65918be13a11261da024d23447d71aa268d501d3c33d34501afaf8f8ca4f15f5 Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.226652 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-08a63dc2dd3b8efcf73ed0d57d0f4e7d541f32f311dbb35074d01744b1a10efa WatchSource:0}: Error finding container 08a63dc2dd3b8efcf73ed0d57d0f4e7d541f32f311dbb35074d01744b1a10efa: Status 404 returned error can't find the container with id 08a63dc2dd3b8efcf73ed0d57d0f4e7d541f32f311dbb35074d01744b1a10efa Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.231866 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.251441 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.256657 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.355743 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-6e67b48cf99ffea0e0b377bb64880f15d3a4c4854ad6aa09670351181ddfacc6 WatchSource:0}: Error finding container 6e67b48cf99ffea0e0b377bb64880f15d3a4c4854ad6aa09670351181ddfacc6: Status 404 returned error can't find the container with id 6e67b48cf99ffea0e0b377bb64880f15d3a4c4854ad6aa09670351181ddfacc6 Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.358907 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7ff30fc3d28d7289518a5e4253caafa0588ac0a3a854e0af51ba409e7734b563 WatchSource:0}: Error finding container 7ff30fc3d28d7289518a5e4253caafa0588ac0a3a854e0af51ba409e7734b563: Status 404 returned error can't find the container with id 7ff30fc3d28d7289518a5e4253caafa0588ac0a3a854e0af51ba409e7734b563 Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.364212 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-ed18741ac681b5d61de1e69248af07937fd39ba4ea566eaac294a1d92c9b57da WatchSource:0}: Error finding container ed18741ac681b5d61de1e69248af07937fd39ba4ea566eaac294a1d92c9b57da: Status 404 returned error can't find the container with id ed18741ac681b5d61de1e69248af07937fd39ba4ea566eaac294a1d92c9b57da Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.470151 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: 
connect: connection refused Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.470234 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.512555 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.514030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.514069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.514079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.514104 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.514649 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.517263 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.518322 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:25:25.444174545 +0000 UTC Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.562811 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ed18741ac681b5d61de1e69248af07937fd39ba4ea566eaac294a1d92c9b57da"} Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.564335 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7ff30fc3d28d7289518a5e4253caafa0588ac0a3a854e0af51ba409e7734b563"} Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.565389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6e67b48cf99ffea0e0b377bb64880f15d3a4c4854ad6aa09670351181ddfacc6"} Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.566158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"65918be13a11261da024d23447d71aa268d501d3c33d34501afaf8f8ca4f15f5"} Jan 26 14:46:44 crc kubenswrapper[4823]: I0126 14:46:44.567052 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"08a63dc2dd3b8efcf73ed0d57d0f4e7d541f32f311dbb35074d01744b1a10efa"} Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.920503 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" 
interval="1.6s" Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.953803 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.953899 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:46:44 crc kubenswrapper[4823]: W0126 14:46:44.962358 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:44 crc kubenswrapper[4823]: E0126 14:46:44.962427 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:46:45 crc kubenswrapper[4823]: W0126 14:46:45.078943 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:45 crc kubenswrapper[4823]: E0126 14:46:45.079051 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.315327 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.316691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.316992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.317014 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.317052 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:46:45 crc kubenswrapper[4823]: E0126 14:46:45.318125 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.106:6443: connect: connection refused" node="crc" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.516637 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.106:6443: connect: connection refused Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.518746 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:31:08.580224957 +0000 UTC Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.571623 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" 
containerID="ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f" exitCode=0 Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.571694 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f"} Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.571812 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.572693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.572722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.572732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.574817 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.575842 4823 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00" exitCode=0 Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.575881 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00"} Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.575993 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 
14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.576665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.576711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.576726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.577456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.577493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.577504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.579401 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7"} Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.579431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c"} Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.581710 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb" exitCode=0 Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 
14:46:45.581830 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.582181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb"} Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.582701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.582722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.582733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.592063 4823 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823" exitCode=0 Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.592132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823"} Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.592310 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.593407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.593435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.593447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:45 crc kubenswrapper[4823]: I0126 14:46:45.645322 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 14:46:45 crc kubenswrapper[4823]: E0126 14:46:45.647004 4823 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.106:6443: connect: connection refused" logger="UnhandledError" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.519611 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:59:57.531619896 +0000 UTC Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.595882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.595927 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.595935 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.596713 4823 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.596742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.596756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.597096 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718" exitCode=0 Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.597149 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.597243 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.597808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.597833 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.597843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.598399 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"13aa7fa5aa898d3825c5adb254ec7ce99a4f0623492d4c460a00d10323e85756"} Jan 26 14:46:46 crc 
kubenswrapper[4823]: I0126 14:46:46.598472 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.599216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.599230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.599237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.607319 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.607340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.607352 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609196 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609216 
4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b"} Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609277 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.609949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.918207 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.919647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.919694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 14:46:46.919713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:46 crc kubenswrapper[4823]: I0126 
14:46:46.919744 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.520465 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:17:40.204265529 +0000 UTC Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.614158 4823 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79" exitCode=0 Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.614264 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79"} Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.614280 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.615224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.615249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.615256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620212 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc"} Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620252 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758"} Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620294 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620341 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620380 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620439 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.620468 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621781 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621770 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621847 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:47 crc kubenswrapper[4823]: I0126 14:46:47.621860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.521159 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 15:00:51.087969232 +0000 UTC Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.626713 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627130 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae"} Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627167 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b"} Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627177 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f"} Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733"} Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627238 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627259 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627463 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.627472 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.628052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.628076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.628084 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:48 crc kubenswrapper[4823]: I0126 14:46:48.983399 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.475226 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.521449 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 07:03:58.38862722 +0000 UTC Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.637522 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.638139 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.638098 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f"} Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.638873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.638928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.638944 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.639268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.639317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.639331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:49 crc kubenswrapper[4823]: I0126 14:46:49.953171 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.521972 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:58:41.444141934 +0000 UTC Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.640355 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.640520 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.641515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.641554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.641568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.641715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:50 crc kubenswrapper[4823]: I0126 14:46:50.641776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:50 crc 
kubenswrapper[4823]: I0126 14:46:50.641805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:51 crc kubenswrapper[4823]: I0126 14:46:51.522806 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 22:56:34.652651851 +0000 UTC Jan 26 14:46:52 crc kubenswrapper[4823]: I0126 14:46:52.523466 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 06:08:28.291781392 +0000 UTC Jan 26 14:46:52 crc kubenswrapper[4823]: I0126 14:46:52.607432 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:46:52 crc kubenswrapper[4823]: I0126 14:46:52.607617 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:52 crc kubenswrapper[4823]: I0126 14:46:52.608840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:52 crc kubenswrapper[4823]: I0126 14:46:52.608888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:52 crc kubenswrapper[4823]: I0126 14:46:52.608901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:53 crc kubenswrapper[4823]: I0126 14:46:53.523711 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 22:32:32.741759211 +0000 UTC Jan 26 14:46:53 crc kubenswrapper[4823]: I0126 14:46:53.581777 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:53 crc 
kubenswrapper[4823]: I0126 14:46:53.582174 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:53 crc kubenswrapper[4823]: I0126 14:46:53.583930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:53 crc kubenswrapper[4823]: I0126 14:46:53.583979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:53 crc kubenswrapper[4823]: I0126 14:46:53.583991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:53 crc kubenswrapper[4823]: E0126 14:46:53.813769 4823 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.427183 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.427326 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.428524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.428552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.428560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.455349 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 
14:46:54.459886 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.509991 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.510197 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.511332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.511387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.511398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.524384 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:47:23.076245396 +0000 UTC Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.652158 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.652251 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.653010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:54 crc kubenswrapper[4823]: I0126 14:46:54.653047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:54 crc kubenswrapper[4823]: 
I0126 14:46:54.653058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:55 crc kubenswrapper[4823]: I0126 14:46:55.525066 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 19:19:27.271643161 +0000 UTC Jan 26 14:46:55 crc kubenswrapper[4823]: I0126 14:46:55.654819 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:55 crc kubenswrapper[4823]: I0126 14:46:55.655836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:55 crc kubenswrapper[4823]: I0126 14:46:55.655871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:55 crc kubenswrapper[4823]: I0126 14:46:55.655882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:55 crc kubenswrapper[4823]: I0126 14:46:55.662408 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.518690 4823 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 26 14:46:56 crc kubenswrapper[4823]: E0126 14:46:56.521963 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.526103 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 07:55:13.328009535 +0000 UTC Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.581856 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.581958 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.656928 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.657836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.657878 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:56 crc kubenswrapper[4823]: I0126 14:46:56.657891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:56 crc kubenswrapper[4823]: E0126 14:46:56.921072 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 26 14:46:57 crc kubenswrapper[4823]: W0126 14:46:57.016565 4823 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.016663 4823 trace.go:236] Trace[239556438]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:46:47.015) (total time: 10001ms): Jan 26 14:46:57 crc kubenswrapper[4823]: Trace[239556438]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:46:57.016) Jan 26 14:46:57 crc kubenswrapper[4823]: Trace[239556438]: [10.001606453s] [10.001606453s] END Jan 26 14:46:57 crc kubenswrapper[4823]: E0126 14:46:57.016709 4823 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.045698 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]","reason":"Forbidden","details":{},"code":403} Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.045760 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with 
statuscode: 403" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.181949 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.182149 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.183247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.183275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.183287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.514931 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.514990 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.526468 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 15:52:06.125458733 +0000 UTC Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.612076 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc 
container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]log ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]etcd ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/priority-and-fairness-filter ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-apiextensions-informers ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-apiextensions-controllers ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/crd-informer-synced ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-system-namespaces-controller ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 26 14:46:57 crc kubenswrapper[4823]: 
[+]poststarthook/start-legacy-token-tracking-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-service-ip-repair-controllers ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Jan 26 14:46:57 crc kubenswrapper[4823]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/priority-and-fairness-config-producer ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/bootstrap-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/start-kube-aggregator-informers ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-status-local-available-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-status-remote-available-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-registration-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-wait-for-first-sync ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-discovery-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/kube-apiserver-autoregistration ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]autoregister-completion ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-openapi-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: [+]poststarthook/apiservice-openapiv3-controller ok
Jan 26 14:46:57 crc kubenswrapper[4823]: livez check failed
Jan 26 14:46:57 crc kubenswrapper[4823]: I0126 14:46:57.612132 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 26 14:46:58 crc kubenswrapper[4823]: I0126 14:46:58.526645 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:39:41.860073343 +0000 UTC
Jan 26 14:46:59 crc kubenswrapper[4823]: I0126 14:46:59.476388 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Jan 26 14:46:59 crc kubenswrapper[4823]: I0126 14:46:59.476452 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Jan 26 14:46:59 crc kubenswrapper[4823]: I0126 14:46:59.527601 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:08:03.494677848 +0000 UTC
Jan 26 14:47:00 crc kubenswrapper[4823]: I0126 14:47:00.121826 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 26 14:47:00 crc kubenswrapper[4823]: I0126 14:47:00.123265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 26 14:47:00 crc kubenswrapper[4823]: I0126 14:47:00.123329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 26 14:47:00 crc kubenswrapper[4823]: I0126 14:47:00.123339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 26 14:47:00 crc kubenswrapper[4823]: I0126 14:47:00.123377 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 26 14:47:00 crc kubenswrapper[4823]: E0126 14:47:00.129444 4823 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 26 14:47:00 crc kubenswrapper[4823]: I0126 14:47:00.528438 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:04:41.721626541 +0000 UTC
Jan 26 14:47:01 crc kubenswrapper[4823]: I0126 14:47:01.528733 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 08:07:46.693021019 +0000 UTC
Jan 26 14:47:01 crc kubenswrapper[4823]: I0126 14:47:01.888707 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.046418 4823 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.051060 4823 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.052192 4823 trace.go:236] Trace[918831767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:46:47.591) (total time: 14460ms):
Jan 26 14:47:02 crc kubenswrapper[4823]: Trace[918831767]: ---"Objects listed" error: 14460ms (14:47:02.052)
Jan 26 14:47:02 crc kubenswrapper[4823]: Trace[918831767]: [14.460206237s] [14.460206237s] END
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.052226 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.052238 4823 trace.go:236] Trace[1592285115]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:46:47.324) (total time: 14727ms):
Jan 26 14:47:02 crc kubenswrapper[4823]: Trace[1592285115]: ---"Objects listed" error: 14727ms (14:47:02.052)
Jan 26 14:47:02 crc kubenswrapper[4823]: Trace[1592285115]: [14.727700471s] [14.727700471s] END
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.052252 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.055636 4823 trace.go:236] Trace[61594442]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Jan-2026 14:46:47.256) (total time: 14799ms):
Jan 26 14:47:02 crc kubenswrapper[4823]: Trace[61594442]: ---"Objects listed" error: 14799ms (14:47:02.055)
Jan 26 14:47:02 crc kubenswrapper[4823]: Trace[61594442]: [14.799318906s] [14.799318906s] END
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.055668 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.517529 4823 apiserver.go:52] "Watching apiserver"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.521123 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.521324 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"]
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.521612 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.521695 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.521741 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.521856 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.522030 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.522194 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.522243 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.522375 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.522557 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.530769 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 13:03:23.16345671 +0000 UTC
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531032 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531123 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531160 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531152 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531032 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531224 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531336 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.531389 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.533235 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.561711 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.586734 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.611613 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.616074 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.618671 4823 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.619247 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.624658 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.631911 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.646081 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653727 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653745 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653762 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653784 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653801 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653820 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653853 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653870 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653885 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654003 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654217 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654318 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654460 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654472 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654521 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654648 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654744 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.653958 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654813 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654936 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.654963 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655013 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655046 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655114 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655139 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655165 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655224 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655168 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655444 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655437 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655507 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655248 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655575 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655592 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655611 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655632 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655649 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.655821 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656006 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656390 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656415 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656466 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656492 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656567 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.656658 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657016 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657172 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657209 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657266 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657664 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657774 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657286 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657845 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.657879 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658203 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658254 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658243 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658223 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658327 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658350 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658397 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 
14:47:02.658451 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658475 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658500 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658523 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658543 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658578 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658638 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658661 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658678 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658697 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658780 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658794 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658799 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658804 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658828 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658839 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658890 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658910 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658898 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658894 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.658979 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659029 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659041 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.659067 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:47:03.159045675 +0000 UTC m=+19.844508780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659091 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659096 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659105 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659170 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659178 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659194 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659215 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659233 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659240 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659249 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659267 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659294 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659339 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659356 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659386 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659401 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659415 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659429 
4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659447 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659464 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659471 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659473 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659481 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659529 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659568 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659574 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659599 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659625 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659651 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659676 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659695 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659713 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659729 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659744 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659760 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659780 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659795 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659812 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659845 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659863 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659950 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659970 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.659987 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " 
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660004 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660020 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660051 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660067 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660082 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660100 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660138 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660160 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660183 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " 
Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660203 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660218 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660237 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660296 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660328 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660355 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660404 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660406 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660560 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660605 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660623 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660649 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660671 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660694 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660714 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660735 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660767 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660813 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). 
InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660823 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.660928 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661023 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661039 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661106 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661176 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661200 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661264 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661283 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661299 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661318 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661334 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661350 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661389 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 14:47:02 
crc kubenswrapper[4823]: I0126 14:47:02.661405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661421 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661437 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661469 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661484 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661501 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661518 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661577 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661592 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 14:47:02 crc 
kubenswrapper[4823]: I0126 14:47:02.661608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661766 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661782 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661798 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661816 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661836 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661854 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661871 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661888 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661904 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661920 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661937 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661955 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661971 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.661987 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662002 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662021 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662802 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662837 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:47:02 crc 
kubenswrapper[4823]: I0126 14:47:02.662860 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662887 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662925 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.662937 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663125 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663157 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663188 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663215 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663242 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663268 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663296 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663320 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663967 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664027 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664081 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664111 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664139 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664166 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664200 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664229 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664256 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664342 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664390 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664412 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664466 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " 
pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664538 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664617 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664659 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664682 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664701 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664721 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664748 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664779 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664882 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664900 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664914 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664927 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664939 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664953 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664976 4823 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664992 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665005 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665017 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665030 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665046 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665057 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665070 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665082 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665095 4823 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on 
node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665110 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665124 4823 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665139 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665154 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665168 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665180 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665195 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 
14:47:02.665209 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665221 4823 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665233 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665245 4823 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665258 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665270 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665287 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665301 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" 
(UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665315 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665329 4823 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665342 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665357 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665396 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665409 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665421 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665432 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665444 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665456 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665467 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665480 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665492 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665505 4823 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 
crc kubenswrapper[4823]: I0126 14:47:02.665517 4823 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665531 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665543 4823 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665554 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665441 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665581 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665602 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665617 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665633 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665646 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665660 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665675 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663024 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663120 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663207 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663253 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663255 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.666735 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663523 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663651 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663664 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663728 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663764 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.663988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664074 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664346 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664379 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664773 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664915 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.664969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665096 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665148 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665173 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665287 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665299 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665320 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665483 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.665342 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.667069 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.667089 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.667546 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.667320 4823 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.667737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.667984 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.668092 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.668150 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:03.168130249 +0000 UTC m=+19.853593354 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.668306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.668524 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.668914 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.668980 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.669017 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.669336 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.669470 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.669534 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.669555 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.670521 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.670568 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.670619 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.670776 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.670801 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.672093 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.674273 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.674300 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:03.174235103 +0000 UTC m=+19.859698208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.673698 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.673838 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.673982 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.674063 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.674109 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.674707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.674639 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.674012 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.676179 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.668091 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.677714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.678495 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.678559 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.678827 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.678859 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.678936 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.679206 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.679864 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.680421 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.678251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.680727 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.682672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.683057 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.683204 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.683310 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.683424 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.683470 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.693175 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.693752 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.693787 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.693803 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.693889 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:03.19386632 +0000 UTC m=+19.879329425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.695126 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.695152 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.695167 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:02 crc kubenswrapper[4823]: E0126 14:47:02.695261 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:03.195241368 +0000 UTC m=+19.880704473 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.695260 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.695700 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.696377 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.696993 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.698487 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.700391 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.700910 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.702166 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.702182 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.701102 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.701169 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.701574 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.701814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.702546 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.702906 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.703007 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.704013 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.704092 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.704252 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.704396 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.704661 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.704698 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.705035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.705826 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.706018 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.706088 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.706498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.706597 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.706767 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.706906 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.707336 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.707571 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.707475 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.707519 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.707629 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.707806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.708733 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.708792 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.716531 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.718075 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.718379 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.718461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.722166 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.725445 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.725753 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.725812 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.725925 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.726048 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.726133 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.726726 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727006 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727131 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727157 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727208 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727284 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727734 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.728211 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.727969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.728258 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.728984 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.728982 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.729083 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.729109 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.729144 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.729511 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.731696 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.731843 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.731919 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.732101 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.732525 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.734687 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.741880 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.742705 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" 
(OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.748670 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.751918 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.761500 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766351 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766410 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766471 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766482 4823 
reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766491 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766502 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766514 4823 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766525 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766535 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766545 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766556 4823 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766567 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766576 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766584 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766592 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766601 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766610 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766618 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766628 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766637 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766676 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766685 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766693 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766704 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766715 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766725 4823 
reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766733 4823 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766741 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766750 4823 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766758 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766766 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766775 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766811 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766836 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766844 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766866 4823 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766874 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766868 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766884 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766945 4823 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766959 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766972 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.766999 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767013 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767024 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767036 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767048 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767060 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767072 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767083 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767095 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767107 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767119 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767131 4823 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767142 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767153 4823 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767166 4823 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767180 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767193 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767216 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767228 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on 
node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767239 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767251 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767262 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767273 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767284 4823 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767296 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767308 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767321 4823 
reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767347 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767384 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767481 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767526 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767538 4823 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767576 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767588 4823 reconciler_common.go:293] "Volume detached for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767599 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767611 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767622 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767633 4823 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767644 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767655 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767665 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node 
\"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767678 4823 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767690 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767701 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767712 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767722 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767733 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767744 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 
14:47:02.767755 4823 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767766 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767776 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767787 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767797 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767811 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767822 4823 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767833 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767844 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767856 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767868 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767912 4823 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767921 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767929 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767937 4823 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767948 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767956 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767965 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767973 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767981 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767990 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.767999 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" 
DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768010 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768022 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768032 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768043 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768055 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768066 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768076 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768087 4823 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768098 4823 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768110 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768121 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768132 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768141 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768149 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768159 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768168 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768176 4823 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768185 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768192 4823 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768201 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768210 4823 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768220 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768228 4823 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768238 4823 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768245 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768254 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768262 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768269 4823 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.768277 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.771771 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.774822 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.788536 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.800953 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.813410 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.823936 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.845736 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.850496 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.860416 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.863440 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 26 14:47:02 crc kubenswrapper[4823]: W0126 14:47:02.870508 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-0d3ee40952f921128ed195ee7ea301bacef223a363697e99c8d32a7b009f2014 WatchSource:0}: Error finding container 0d3ee40952f921128ed195ee7ea301bacef223a363697e99c8d32a7b009f2014: Status 404 returned error can't find the container with id 0d3ee40952f921128ed195ee7ea301bacef223a363697e99c8d32a7b009f2014 Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.878419 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 26 14:47:02 crc kubenswrapper[4823]: I0126 14:47:02.895845 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.170454 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.170594 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:47:04.170571974 +0000 UTC m=+20.856035079 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.170740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.170828 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.170873 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:04.170865322 +0000 UTC m=+20.856328427 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.271503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.271549 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.271574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" 
(UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271665 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271707 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271729 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271740 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271767 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:04.271744901 +0000 UTC m=+20.957208056 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271680 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271788 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271796 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271796 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:04.271785332 +0000 UTC m=+20.957248437 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:03 crc kubenswrapper[4823]: E0126 14:47:03.271830 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:04.271812033 +0000 UTC m=+20.957275188 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.531683 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:46:28.919214041 +0000 UTC Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.563009 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.563819 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 
14:47:03.565043 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.565665 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.566606 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.567157 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.567794 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.568747 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.569383 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.570288 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 
14:47:03.570788 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.571865 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.572359 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.572889 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.573806 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.574355 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.575320 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.575700 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 
14:47:03.576335 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.576938 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.577940 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.578420 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.579522 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.580060 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.581703 
4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.582280 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.583018 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.584806 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.585272 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.586173 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.586707 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.587575 4823 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 26 
14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.587677 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.589458 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.590504 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.591185 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.592986 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.593646 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.594550 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.595167 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 26 
14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.596274 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.596926 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.596696 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.597929 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.598513 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.599461 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.599920 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.600767 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.601229 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.602270 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.602729 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.603603 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.604117 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.605108 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.605758 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.606255 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.607318 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.608974 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.612443 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.616339 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.623890 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.639052 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.650166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.662714 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.673394 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.687769 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.694795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"263169a057e71fb52f33f72a8d9137c4dfceabf4de752c9a79c7948775b3ae9a"} Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.696406 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee"} Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.696462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd"} Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.696473 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a9428bd8e9fc3169b28a6e9f0d490cb615d23b6f4317b81469767bfe9443d43b"} Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.697632 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a"} Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.697665 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0d3ee40952f921128ed195ee7ea301bacef223a363697e99c8d32a7b009f2014"} Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.703933 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.715842 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.728051 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.745417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.760809 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.773876 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.787517 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.801738 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.820547 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.836066 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.849281 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.863915 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.876558 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:03 crc kubenswrapper[4823]: I0126 14:47:03.894905 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.177578 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.177676 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.177789 4823 
secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.177871 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:47:06.177823827 +0000 UTC m=+22.863286932 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.177922 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:06.177909939 +0000 UTC m=+22.863373264 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.278561 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.278635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.278703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278786 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278805 4823 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278804 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278819 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278820 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278877 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:06.27885947 +0000 UTC m=+22.964322575 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278909 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:06.278889341 +0000 UTC m=+22.964352446 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278827 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278942 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.278971 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2026-01-26 14:47:06.278964323 +0000 UTC m=+22.964427428 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.532823 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 11:52:07.538597513 +0000 UTC Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.559317 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.559459 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:04 crc kubenswrapper[4823]: I0126 14:47:04.559527 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.559511 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.559596 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:04 crc kubenswrapper[4823]: E0126 14:47:04.560473 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.533943 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:04:19.112067496 +0000 UTC Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.702588 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8"} Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.714418 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.725542 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.737613 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.748725 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.761868 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.774129 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.806164 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-s
cript\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:05 crc kubenswrapper[4823]: I0126 14:47:05.823882 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9d
a410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:05Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.194558 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.194635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.194755 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.194823 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:47:10.194796019 +0000 UTC m=+26.880259154 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.194869 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:10.19485265 +0000 UTC m=+26.880315795 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.295997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.296038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.296057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296158 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296175 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296187 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296230 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:10.296216972 +0000 UTC m=+26.981680077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296325 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296400 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296419 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296433 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296445 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:10.296422418 +0000 UTC m=+26.981885553 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.296470 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:10.296458249 +0000 UTC m=+26.981921354 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.529745 4823 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.531121 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.531146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.531154 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.531220 4823 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.534442 
4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:02:50.66793152 +0000 UTC Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537016 4823 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537182 4823 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.537929 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.553010 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.556311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.556346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.556356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.556382 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.556392 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.559395 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.559475 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.559505 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.559395 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.559568 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.559641 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.567493 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.570391 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.570419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.570427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.570441 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.570450 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.581588 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.584961 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.585016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.585030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.585047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.585056 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.596533 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.599190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.599236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.599248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.599259 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.599268 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.611037 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:06 crc kubenswrapper[4823]: E0126 14:47:06.611153 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.612306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.612329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.612339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.612352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.612381 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.714529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.714576 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.714588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.714603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.714614 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.816920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.816989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.817002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.817019 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.817031 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.919508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.919554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.919572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.919599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:06 crc kubenswrapper[4823]: I0126 14:47:06.919612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:06Z","lastTransitionTime":"2026-01-26T14:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.021819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.021865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.021877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.021895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.021906 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.079414 4823 csr.go:261] certificate signing request csr-ss7nm is approved, waiting to be issued Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.104837 4823 csr.go:257] certificate signing request csr-ss7nm is issued Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.124599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.124647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.124657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.124673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.124684 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.214721 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.222579 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-d69wh"] Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.222889 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.225910 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.227180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.227216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.227227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.227245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.227258 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.228178 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.229787 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.230052 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.235938 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.244707 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.255951 4823 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.269485 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.280859 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.296680 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.305559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-serviceca\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.305627 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-host\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.305662 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd95n\" (UniqueName: \"kubernetes.io/projected/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-kube-api-access-qd95n\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.317614 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.329183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.329299 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.329311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.329326 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.329336 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.333760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.348486 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.362011 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.375393 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.387489 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Comp
leted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.394015 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.406570 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.406797 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd95n\" (UniqueName: \"kubernetes.io/projected/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-kube-api-access-qd95n\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.406865 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" 
(UniqueName: \"kubernetes.io/configmap/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-serviceca\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.406906 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-host\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.406963 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-host\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.408340 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-serviceca\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.424289 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.430675 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd95n\" (UniqueName: \"kubernetes.io/projected/a32f9039-ae4f-4825-b1d4-3a1349d56d7f-kube-api-access-qd95n\") pod \"node-ca-d69wh\" (UID: \"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\") " 
pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.431213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.431247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.431258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.431274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.431285 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.438406 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.450880 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.471083 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.480984 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.533178 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-d69wh" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.533543 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.533584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.533598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.533616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.533627 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.535209 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 01:15:37.124490995 +0000 UTC Jan 26 14:47:07 crc kubenswrapper[4823]: W0126 14:47:07.551129 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda32f9039_ae4f_4825_b1d4_3a1349d56d7f.slice/crio-ecf83e09c4919321bad425a889f5396270ccf205672299442d42edb3fa9b19b3 WatchSource:0}: Error finding container ecf83e09c4919321bad425a889f5396270ccf205672299442d42edb3fa9b19b3: Status 404 returned error can't find the container with id ecf83e09c4919321bad425a889f5396270ccf205672299442d42edb3fa9b19b3 Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.640984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.641567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.641632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.641663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.641688 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.671470 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-bfxnx"] Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.671815 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.673848 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.673939 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.674138 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.686088 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.708154 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-d69wh" event={"ID":"a32f9039-ae4f-4825-b1d4-3a1349d56d7f","Type":"ContainerStarted","Data":"ecf83e09c4919321bad425a889f5396270ccf205672299442d42edb3fa9b19b3"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.713995 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\
"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: E0126 14:47:07.716681 4823 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.731343 4823 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.744921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.744955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.744966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.744984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.744995 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.747889 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.767760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.788034 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.808508 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.809860 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8dz5\" (UniqueName: \"kubernetes.io/projected/ec2a580e-bcb0-478f-9230-c8d40b4748d5-kube-api-access-w8dz5\") pod \"node-resolver-bfxnx\" (UID: \"ec2a580e-bcb0-478f-9230-c8d40b4748d5\") " pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.809914 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec2a580e-bcb0-478f-9230-c8d40b4748d5-hosts-file\") pod \"node-resolver-bfxnx\" (UID: \"ec2a580e-bcb0-478f-9230-c8d40b4748d5\") " pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.820413 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.843435 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.847385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.847428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.847439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.847456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.847467 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.858737 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad
818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.869058 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:07Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.910871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8dz5\" (UniqueName: \"kubernetes.io/projected/ec2a580e-bcb0-478f-9230-c8d40b4748d5-kube-api-access-w8dz5\") pod \"node-resolver-bfxnx\" (UID: \"ec2a580e-bcb0-478f-9230-c8d40b4748d5\") " pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.910927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec2a580e-bcb0-478f-9230-c8d40b4748d5-hosts-file\") pod \"node-resolver-bfxnx\" (UID: \"ec2a580e-bcb0-478f-9230-c8d40b4748d5\") " pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.910984 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ec2a580e-bcb0-478f-9230-c8d40b4748d5-hosts-file\") pod \"node-resolver-bfxnx\" (UID: \"ec2a580e-bcb0-478f-9230-c8d40b4748d5\") " pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.925983 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8dz5\" (UniqueName: \"kubernetes.io/projected/ec2a580e-bcb0-478f-9230-c8d40b4748d5-kube-api-access-w8dz5\") pod \"node-resolver-bfxnx\" (UID: \"ec2a580e-bcb0-478f-9230-c8d40b4748d5\") " pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.949818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.949865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.949878 
4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.949897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.949910 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:07Z","lastTransitionTime":"2026-01-26T14:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:07 crc kubenswrapper[4823]: I0126 14:47:07.989733 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-bfxnx" Jan 26 14:47:08 crc kubenswrapper[4823]: W0126 14:47:08.001481 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec2a580e_bcb0_478f_9230_c8d40b4748d5.slice/crio-a4fb4d184d3b8122ce0725aa3582e3cff9fe5c1435ec955bc56c69854c5d5c4c WatchSource:0}: Error finding container a4fb4d184d3b8122ce0725aa3582e3cff9fe5c1435ec955bc56c69854c5d5c4c: Status 404 returned error can't find the container with id a4fb4d184d3b8122ce0725aa3582e3cff9fe5c1435ec955bc56c69854c5d5c4c Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.052540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.052585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.052596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.052613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.052622 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.069076 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-kv6z2"] Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.069433 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-p555f"] Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.069573 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zlr4w"] Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.069631 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.069796 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.070496 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.073302 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpz7g"] Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.073991 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.074200 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.074238 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.074264 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.074486 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.075122 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.075496 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.075724 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.075783 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.075779 4823 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.076112 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.076235 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.076408 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.078800 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.079207 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.080000 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.080052 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.080400 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.080450 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.080960 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.088977 4823 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.106088 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.106294 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-26 14:42:07 +0000 UTC, rotation deadline is 2026-10-15 18:39:07.338640818 +0000 UTC Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.106342 4823 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6291h51m59.23230219s for next certificate rotation Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.125196 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runni
ng\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.147036 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",
\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf
4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.155470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.155512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.155534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.155555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.155567 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.158075 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.171169 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.187623 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.199434 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213047 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-system-cni-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213116 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-cni-binary-copy\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213165 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-cnibin\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213191 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf2sr\" (UniqueName: \"kubernetes.io/projected/232a66a2-55bb-44f6-81a0-383432fbf1d5-kube-api-access-nf2sr\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213218 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-cni-multus\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213242 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-var-lib-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213264 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-kubelet\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-script-lib\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213312 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-node-log\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213337 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-netd\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213379 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-os-release\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213404 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-etc-kubernetes\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213427 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-mcd-auth-proxy-config\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213450 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-systemd\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213472 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-log-socket\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213494 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-env-overrides\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213516 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213538 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-socket-dir-parent\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213562 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-kubelet\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213586 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213621 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-multus-certs\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213658 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-proxy-tls\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213682 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-bin\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-hostroot\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213835 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-systemd-units\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213861 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-config\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213883 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-netns\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213905 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-cni-bin\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213925 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-rootfs\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213946 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-os-release\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 
14:47:08.213966 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovn-node-metrics-cert\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.213988 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-cnibin\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214011 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1364c14e-d1f9-422e-bbee-efd99f5f2271-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214031 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-etc-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214052 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-conf-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc 
kubenswrapper[4823]: I0126 14:47:08.214072 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5zm\" (UniqueName: \"kubernetes.io/projected/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-kube-api-access-7d5zm\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214094 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-system-cni-dir\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214125 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-netns\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214148 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-k8s-cni-cncf-io\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2488b\" (UniqueName: \"kubernetes.io/projected/1364c14e-d1f9-422e-bbee-efd99f5f2271-kube-api-access-2488b\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: 
\"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214314 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-ovn\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214376 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-cni-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214434 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-slash\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214480 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-daemon-config\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214504 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z4t6\" (UniqueName: \"kubernetes.io/projected/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-kube-api-access-9z4t6\") pod \"multus-p555f\" (UID: 
\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.214530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1364c14e-d1f9-422e-bbee-efd99f5f2271-cni-binary-copy\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.218805 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.236606 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.250199 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.258521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.258560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.258578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.258605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.258618 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.285051 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad
818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.302392 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-ovn\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" 
Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-cni-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315152 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-slash\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315174 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-daemon-config\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z4t6\" (UniqueName: \"kubernetes.io/projected/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-kube-api-access-9z4t6\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1364c14e-d1f9-422e-bbee-efd99f5f2271-cni-binary-copy\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315222 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315236 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-system-cni-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315250 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-cni-binary-copy\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315268 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-cnibin\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf2sr\" (UniqueName: \"kubernetes.io/projected/232a66a2-55bb-44f6-81a0-383432fbf1d5-kube-api-access-nf2sr\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315301 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-cni-multus\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315316 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-var-lib-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-kubelet\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315346 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-script-lib\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-node-log\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-netd\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315412 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-os-release\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-etc-kubernetes\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315444 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-mcd-auth-proxy-config\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315508 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-systemd\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315523 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-log-socket\") pod 
\"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315540 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-env-overrides\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-socket-dir-parent\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315573 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-kubelet\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315610 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: 
\"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315626 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-multus-certs\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315642 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-proxy-tls\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315657 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315673 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-bin\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315687 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-hostroot\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " 
pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315711 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-systemd-units\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-config\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315744 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-netns\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315760 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-cni-bin\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315790 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-rootfs\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315807 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-os-release\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315823 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovn-node-metrics-cert\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315836 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-cnibin\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1364c14e-d1f9-422e-bbee-efd99f5f2271-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.315871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-conf-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316037 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-7d5zm\" (UniqueName: \"kubernetes.io/projected/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-kube-api-access-7d5zm\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-system-cni-dir\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316078 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-etc-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-netns\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-k8s-cni-cncf-io\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2488b\" (UniqueName: \"kubernetes.io/projected/1364c14e-d1f9-422e-bbee-efd99f5f2271-kube-api-access-2488b\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-etc-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316588 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-ovn\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-k8s-cni-cncf-io\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-netns\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316795 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-cni-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316852 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-socket-dir-parent\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316887 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316927 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-kubelet\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316962 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.316997 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-multus-certs\") pod \"multus-p555f\" (UID: 
\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317417 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-env-overrides\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317458 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-slash\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-var-lib-openvswitch\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317643 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-rootfs\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317716 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-os-release\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 
14:47:08.317808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-kubelet\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317854 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-bin\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.317987 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-hostroot\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318030 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-systemd-units\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-etc-kubernetes\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318148 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-os-release\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-conf-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318101 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-node-log\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318207 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-system-cni-dir\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318237 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-cnibin\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318278 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-netd\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318241 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-cni-bin\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-cnibin\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318272 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-log-socket\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-run-netns\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc 
kubenswrapper[4823]: I0126 14:47:08.318327 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-host-var-lib-cni-multus\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318385 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-systemd\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318414 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/1364c14e-d1f9-422e-bbee-efd99f5f2271-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1364c14e-d1f9-422e-bbee-efd99f5f2271-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318470 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-system-cni-dir\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318847 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1364c14e-d1f9-422e-bbee-efd99f5f2271-cni-binary-copy\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.318891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-cni-binary-copy\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.319228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-script-lib\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.319247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-mcd-auth-proxy-config\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.319322 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-config\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.319567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-multus-daemon-config\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.322974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-proxy-tls\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.322983 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovn-node-metrics-cert\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.340615 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z4t6\" (UniqueName: \"kubernetes.io/projected/6e7853ce-0557-452f-b7ae-cc549bf8e2ae-kube-api-access-9z4t6\") pod \"multus-p555f\" (UID: \"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\") " pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.351186 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2488b\" (UniqueName: \"kubernetes.io/projected/1364c14e-d1f9-422e-bbee-efd99f5f2271-kube-api-access-2488b\") pod \"multus-additional-cni-plugins-zlr4w\" (UID: \"1364c14e-d1f9-422e-bbee-efd99f5f2271\") " pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.355242 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.357133 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d5zm\" (UniqueName: \"kubernetes.io/projected/1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d-kube-api-access-7d5zm\") pod \"machine-config-daemon-kv6z2\" (UID: \"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\") " pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.360354 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf2sr\" (UniqueName: \"kubernetes.io/projected/232a66a2-55bb-44f6-81a0-383432fbf1d5-kube-api-access-nf2sr\") pod \"ovnkube-node-kpz7g\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.362841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.362877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.362886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.362903 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.362912 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.385342 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.392000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p555f" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.399712 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" Jan 26 14:47:08 crc kubenswrapper[4823]: W0126 14:47:08.406989 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e7853ce_0557_452f_b7ae_cc549bf8e2ae.slice/crio-ab5fc436bbb9bf7a57a726bd241b134e221412a337a90370307b32e88a1dec87 WatchSource:0}: Error finding container ab5fc436bbb9bf7a57a726bd241b134e221412a337a90370307b32e88a1dec87: Status 404 returned error can't find the container with id ab5fc436bbb9bf7a57a726bd241b134e221412a337a90370307b32e88a1dec87 Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.409668 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.410732 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:08 crc kubenswrapper[4823]: W0126 14:47:08.428727 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1364c14e_d1f9_422e_bbee_efd99f5f2271.slice/crio-0e11b636b840a0ad4c81c7b6fbbd4b226cf2fdfbe432b1c2e0a94ec15398919d WatchSource:0}: Error finding container 0e11b636b840a0ad4c81c7b6fbbd4b226cf2fdfbe432b1c2e0a94ec15398919d: Status 404 returned error can't find the container with id 0e11b636b840a0ad4c81c7b6fbbd4b226cf2fdfbe432b1c2e0a94ec15398919d Jan 26 14:47:08 crc kubenswrapper[4823]: W0126 14:47:08.436727 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod232a66a2_55bb_44f6_81a0_383432fbf1d5.slice/crio-3b73cb140c99dc12c7aa1208e30aa51e378d517b478ebb3be9d4a2d3f7717c83 WatchSource:0}: Error finding container 3b73cb140c99dc12c7aa1208e30aa51e378d517b478ebb3be9d4a2d3f7717c83: Status 404 returned error can't find the container with id 3b73cb140c99dc12c7aa1208e30aa51e378d517b478ebb3be9d4a2d3f7717c83 Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.461190 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.464644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.464677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.464690 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.464704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.464713 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.493276 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.514533 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.533953 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d
44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.536832 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 23:18:44.615077971 +0000 UTC Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.544879 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.558983 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.559297 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.559340 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.559297 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:08 crc kubenswrapper[4823]: E0126 14:47:08.559445 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:08 crc kubenswrapper[4823]: E0126 14:47:08.559558 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:08 crc kubenswrapper[4823]: E0126 14:47:08.559689 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.566542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.566567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.566576 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.566588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.566609 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.572439 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.583452 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.602438 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.615280 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.627308 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.639217 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.669456 4823 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.669494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.669505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.669522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.669532 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.712015 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" exitCode=0 Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.712099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.712138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"3b73cb140c99dc12c7aa1208e30aa51e378d517b478ebb3be9d4a2d3f7717c83"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.713594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerStarted","Data":"f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.713661 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerStarted","Data":"ab5fc436bbb9bf7a57a726bd241b134e221412a337a90370307b32e88a1dec87"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.722184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bfxnx" event={"ID":"ec2a580e-bcb0-478f-9230-c8d40b4748d5","Type":"ContainerStarted","Data":"220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.722231 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/node-resolver-bfxnx" event={"ID":"ec2a580e-bcb0-478f-9230-c8d40b4748d5","Type":"ContainerStarted","Data":"a4fb4d184d3b8122ce0725aa3582e3cff9fe5c1435ec955bc56c69854c5d5c4c"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.726626 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerStarted","Data":"fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.726677 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerStarted","Data":"0e11b636b840a0ad4c81c7b6fbbd4b226cf2fdfbe432b1c2e0a94ec15398919d"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.728303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.728349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.728381 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"362ca60c179eec145a67489c79a0b2c29dc16e10a30540e6026d3f6d1acca96e"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.729339 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/node-ca-d69wh" event={"ID":"a32f9039-ae4f-4825-b1d4-3a1349d56d7f","Type":"ContainerStarted","Data":"ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.740436 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c9
6ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.755878 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.766611 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.771396 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.771422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.771430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.771444 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.771454 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.783755 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.796425 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1
506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.807732 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.823054 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.833571 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.844773 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.858305 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.875759 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.876993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc 
kubenswrapper[4823]: I0126 14:47:08.877040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.877054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.877071 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.877081 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.891238 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.907810 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.921660 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.938564 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.958967 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.970910 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.979838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.979867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.979877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.979891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.979899 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:08Z","lastTransitionTime":"2026-01-26T14:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:08 crc kubenswrapper[4823]: I0126 14:47:08.982521 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.004457 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.016214 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.027541 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.045678 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.058718 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.070498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.082566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.082621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.082631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.082646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.082658 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.105166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.143321 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.185426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.185468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.185479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.185496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.185508 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.186287 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.226235 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.266153 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.288043 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.288114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.288130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.288154 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.288220 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.313782 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.390150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.390203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.390220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.390236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.390245 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.493433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.493484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.493495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.493555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.493568 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.537165 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 08:21:25.712046127 +0000 UTC Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.595572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.595608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.595617 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.595631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.595643 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.697946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.697988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.697999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.698016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.698027 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.735839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.735890 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.735903 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.735916 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.735928 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.735942 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} Jan 26 14:47:09 crc kubenswrapper[4823]: 
I0126 14:47:09.737297 4823 generic.go:334] "Generic (PLEG): container finished" podID="1364c14e-d1f9-422e-bbee-efd99f5f2271" containerID="fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6" exitCode=0 Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.737378 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerDied","Data":"fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.751603 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126
bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"na
me\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.767525 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.779651 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.788279 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.798149 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.799832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.799873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.799886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.799903 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.799914 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.811166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.823829 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.835789 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.850914 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.864917 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.877284 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.898581 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.903884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.903925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.903934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.903949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.903959 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:09Z","lastTransitionTime":"2026-01-26T14:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.910449 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.920330 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:09 crc kubenswrapper[4823]: I0126 14:47:09.940028 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:09Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.005528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.005573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.005585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.005602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.005615 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.108097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.108148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.108160 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.108177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.108189 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.210805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.210849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.210860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.210875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.210886 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.238426 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.238657 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:47:18.238622677 +0000 UTC m=+34.924085782 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.238733 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.238899 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.238957 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:18.238945236 +0000 UTC m=+34.924408341 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.313254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.313305 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.313344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.313376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.313388 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.339769 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.339810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.339837 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.339910 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.339942 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.339955 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.339964 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.339976 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.340009 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.340020 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.339985 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:18.339969989 +0000 UTC m=+35.025433084 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.340086 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:18.340071212 +0000 UTC m=+35.025534317 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.340097 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:18.340092733 +0000 UTC m=+35.025555838 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.415752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.415783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.415793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.415808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.415818 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.518169 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.518209 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.518219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.518237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.518249 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.538591 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:36:29.3238219 +0000 UTC Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.559257 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.559312 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.559395 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.559421 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.559556 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:10 crc kubenswrapper[4823]: E0126 14:47:10.559700 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.620426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.620846 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.621119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.621203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.621289 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.724133 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.724414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.724511 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.724585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.724643 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.742032 4823 generic.go:334] "Generic (PLEG): container finished" podID="1364c14e-d1f9-422e-bbee-efd99f5f2271" containerID="9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973" exitCode=0 Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.742077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerDied","Data":"9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.756271 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.769602 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.782152 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.800947 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.810898 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.820903 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.826319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.826349 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.826370 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.826383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.826393 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.838916 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.851259 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.862749 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.875378 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.884914 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.897488 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.911494 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.926271 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.928483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.928519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.928531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.928549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.928562 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:10Z","lastTransitionTime":"2026-01-26T14:47:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:10 crc kubenswrapper[4823]: I0126 14:47:10.939760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:10Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.031090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.031121 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.031128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.031142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.031151 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.133562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.133610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.133621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.133638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.133648 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.236192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.236227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.236236 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.236252 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.236264 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.338156 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.338192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.338201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.338216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.338230 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.440151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.440192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.440202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.440219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.440229 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.539199 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:40:00.099310585 +0000 UTC Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.543468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.543498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.543507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.543524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.543537 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.645736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.645775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.645786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.645803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.645812 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746868 4823 generic.go:334] "Generic (PLEG): container finished" podID="1364c14e-d1f9-422e-bbee-efd99f5f2271" containerID="69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010" exitCode=0 Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746920 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerDied","Data":"69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.746978 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.762256 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad
818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.776633 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.790019 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.812134 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.823648 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.840387 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.850027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.850069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.850081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.850098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.850108 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.852451 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.864138 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.880768 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.891114 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.905833 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 
14:47:11.919985 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980
dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver
-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.931236 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could 
not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.944906 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.952607 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.952641 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.952651 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.952667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.952675 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:11Z","lastTransitionTime":"2026-01-26T14:47:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:11 crc kubenswrapper[4823]: I0126 14:47:11.957979 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b
3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:11Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.056035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.056078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.056087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.056103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.056113 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.158044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.158084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.158096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.158113 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.158125 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.260761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.260793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.260802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.260828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.260837 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.362798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.362843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.362853 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.362869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.362880 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.465534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.465571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.465581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.465597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.465608 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.539587 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 10:44:04.64693833 +0000 UTC Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.559890 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.559886 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:12 crc kubenswrapper[4823]: E0126 14:47:12.560009 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.559905 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:12 crc kubenswrapper[4823]: E0126 14:47:12.560230 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:12 crc kubenswrapper[4823]: E0126 14:47:12.560305 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.567611 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.567638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.567647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.567661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.567669 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.669400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.669425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.669442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.669464 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.669482 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.752984 4823 generic.go:334] "Generic (PLEG): container finished" podID="1364c14e-d1f9-422e-bbee-efd99f5f2271" containerID="d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0" exitCode=0 Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.753064 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerDied","Data":"d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.758021 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.772157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.772193 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.772203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.772219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.772229 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.772530 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26
T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.784090 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.796844 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.807542 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.818140 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.834050 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.846030 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.860275 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.872694 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.875405 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.875444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.875456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.875474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.875486 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.887524 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.898166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.914627 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.923869 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.931208 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.947040 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:12Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.978098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.978132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.978141 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.978159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:12 crc kubenswrapper[4823]: I0126 14:47:12.978173 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:12Z","lastTransitionTime":"2026-01-26T14:47:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.080578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.080621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.080635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.080652 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.080662 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.183202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.183296 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.183310 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.183401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.183415 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.285534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.285577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.285588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.285606 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.285618 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.387794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.387832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.387842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.387855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.387864 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.442266 4823 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.490401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.490444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.490452 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.490466 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.490476 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.540115 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 14:10:56.010120568 +0000 UTC Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.570131 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 
14:47:13.584969 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.593223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.593273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.593285 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.593302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.593313 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.597818 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.608074 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.618561 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.627934 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.646095 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.676924 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.692625 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.695000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.695020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.695028 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.695041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.695050 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.704467 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.713879 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.730167 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.742121 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.751544 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.762817 4823 generic.go:334] "Generic (PLEG): container finished" podID="1364c14e-d1f9-422e-bbee-efd99f5f2271" containerID="2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e" exitCode=0 Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.762856 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerDied","Data":"2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.770751 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.784940 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.797535 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"m
ountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.797806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.797831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.797841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.797855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.797864 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.812540 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.827575 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.840744 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.860583 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.873574 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.888344 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.900403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.900444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.900457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.900474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.900485 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:13Z","lastTransitionTime":"2026-01-26T14:47:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.906304 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.920858 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d5
1d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.933307 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.944215 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.956187 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.968846 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:13 crc kubenswrapper[4823]: I0126 14:47:13.979968 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.003059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.003092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.003101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.003114 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.003123 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.106408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.106980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.107068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.107170 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.107231 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.210163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.210201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.210211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.210279 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.210289 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.313023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.313051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.313060 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.313073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.313084 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.415470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.415510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.415520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.415535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.415545 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.517688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.517726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.517737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.517752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.517761 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.540792 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:36:21.416972013 +0000 UTC Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.560181 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:14 crc kubenswrapper[4823]: E0126 14:47:14.560290 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.560356 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.560451 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:14 crc kubenswrapper[4823]: E0126 14:47:14.560498 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:14 crc kubenswrapper[4823]: E0126 14:47:14.560575 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.619840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.619873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.619881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.619895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.619907 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.721995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.722033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.722043 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.722058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.722069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.767152 4823 generic.go:334] "Generic (PLEG): container finished" podID="1364c14e-d1f9-422e-bbee-efd99f5f2271" containerID="a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e" exitCode=0 Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.767190 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerDied","Data":"a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.781008 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.793789 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.805791 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.815031 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.823723 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.823767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.823780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.823798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.823810 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.826645 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.841732 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.853061 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.863432 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.874145 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.886284 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.897758 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.916413 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.926338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.926600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.926662 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.926724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.926786 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:14Z","lastTransitionTime":"2026-01-26T14:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.927106 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.936455 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:14 crc kubenswrapper[4823]: I0126 14:47:14.954340 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:14Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.029814 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.030235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.030458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.030615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.030888 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.134054 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.134109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.134123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.134145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.134161 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.236150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.236184 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.236192 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.236205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.236214 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.339517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.339560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.339571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.339588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.339599 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.442646 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.442682 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.442693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.442707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.442717 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.541777 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 02:17:03.135879807 +0000 UTC Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.545119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.545155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.545163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.545178 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.545188 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.647338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.647386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.647395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.647409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.647420 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.749729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.749765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.749777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.749791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.749803 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.852215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.852244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.852254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.852267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.852276 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.954917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.954960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.954973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.954990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:15 crc kubenswrapper[4823]: I0126 14:47:15.955002 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:15Z","lastTransitionTime":"2026-01-26T14:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.057834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.057861 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.057869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.057884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.057894 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.161349 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.161746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.161760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.161777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.161789 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.264837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.265116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.265131 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.265150 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.265162 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.368109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.368155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.368166 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.368183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.368195 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.470766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.470793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.470802 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.470815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.470823 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.541911 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 18:38:09.748477848 +0000 UTC Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.559313 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:16 crc kubenswrapper[4823]: E0126 14:47:16.559456 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.559531 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:16 crc kubenswrapper[4823]: E0126 14:47:16.559696 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.559809 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:16 crc kubenswrapper[4823]: E0126 14:47:16.559890 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.573688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.573731 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.573754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.573775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.573788 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.676282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.676317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.676325 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.676339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.676349 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777491 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777893 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777945 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.777860 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.778085 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.785456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" event={"ID":"1364c14e-d1f9-422e-bbee-efd99f5f2271","Type":"ContainerStarted","Data":"c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.793013 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.845647 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.848471 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.849508 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.859100 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.869833 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.880479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.880521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.880533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.880551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.880563 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.881236 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.898554 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.919179 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.931416 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.939973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.940008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.940017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.940030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.940038 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.942183 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.952608 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: E0126 14:47:16.953009 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.957626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.957668 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.957679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.957696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.957709 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.968460 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: E0126 14:47:16.969944 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.974468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.974508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.974519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.974540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.974552 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.985113 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: E0126 14:47:16.986092 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.990288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.990317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.990326 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.990341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:16 crc kubenswrapper[4823]: I0126 14:47:16.990353 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:16Z","lastTransitionTime":"2026-01-26T14:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.001433 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:16Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: E0126 14:47:17.003230 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"kubelet 
has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbf
ff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},
{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3e
e8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\
"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.008536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.008659 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.008737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.008766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.008796 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.020159 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: E0126 14:47:17.024686 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: E0126 14:47:17.024841 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.026496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.026533 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.026545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.026564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.026576 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.036774 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.049557 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.062200 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.075316 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.091833 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.102417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.111402 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.128715 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.128755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.128766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.128782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.128795 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.136269 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.152244 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.163855 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.175643 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.187071 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.199426 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.213389 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.223241 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.231240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.231297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.231307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.231323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.231332 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.234576 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:17Z 
is after 2025-08-24T17:21:41Z" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.333822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.333860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.333870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.333886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.333899 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.435767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.435839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.435858 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.435905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.435925 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.539038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.539076 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.539088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.539105 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.539116 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.544636 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:48:34.729412042 +0000 UTC Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.642031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.642074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.642085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.642104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.642115 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.744296 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.744341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.744353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.744385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.744398 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.790634 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.847504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.847574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.847594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.847624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.847644 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.951064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.951139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.951152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.951176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:17 crc kubenswrapper[4823]: I0126 14:47:17.951191 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:17Z","lastTransitionTime":"2026-01-26T14:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.054427 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.054470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.054480 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.054497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.054519 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.157831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.157929 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.157956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.157995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.158022 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.260130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.260167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.260175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.260188 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.260197 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.263577 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.263743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.263847 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:47:34.263798918 +0000 UTC m=+50.949262023 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.263909 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.263992 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:34.263970512 +0000 UTC m=+50.949433877 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.362298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.362320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.362329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.362374 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.362384 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.364862 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.364883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.364911 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.364989 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365059 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:34.365039327 +0000 UTC m=+51.050502442 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.364996 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365116 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365127 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365155 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:34.365146779 +0000 UTC m=+51.050609884 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365250 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365309 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365329 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.365455 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:47:34.365423537 +0000 UTC m=+51.050886652 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.465306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.465373 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.465385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.465403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.465416 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.544851 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:00:49.22534696 +0000 UTC Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.560203 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.560256 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.560352 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.560444 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.560706 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:18 crc kubenswrapper[4823]: E0126 14:47:18.560828 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.567600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.567639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.567649 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.567664 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.567674 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.671110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.671161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.671171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.671191 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.671207 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.775088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.775148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.775167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.775198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.775218 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.793692 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.877604 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.877654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.877663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.877681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.877692 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.980993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.981077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.981108 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.981142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:18 crc kubenswrapper[4823]: I0126 14:47:18.981172 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:18Z","lastTransitionTime":"2026-01-26T14:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.084676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.084753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.084770 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.084799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.084818 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.188834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.188926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.188952 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.188987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.189013 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.292701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.292769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.292794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.292826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.292846 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.550646 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:45:24.040280643 +0000 UTC Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.553398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.553428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.553439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.553456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.553469 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.656082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.656123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.656132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.656146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.656155 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.759439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.759494 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.759507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.759529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.759547 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.799056 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/0.log" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.803044 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597" exitCode=1 Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.803095 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.803802 4823 scope.go:117] "RemoveContainer" containerID="158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.833526 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.845987 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.858041 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.862430 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.862483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.862498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.862519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.862534 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.882235 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802
767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.895859 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.915551 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.930002 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.946125 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.961900 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.964936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.964967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.964977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.964995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.965007 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:19Z","lastTransitionTime":"2026-01-26T14:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.974492 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:19 crc kubenswrapper[4823]: I0126 14:47:19.990728 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:19Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.004379 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.018427 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.032056 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.046749 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.068117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.068175 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.068189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.068210 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.068224 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.171631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.171688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.171699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.171724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.171741 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.281544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.281588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.281600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.281621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.281635 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.326991 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46"] Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.327517 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.330118 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.330287 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.343862 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.357918 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.372891 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.380484 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f0bc2d5-070a-415d-b477-914c63ad7b57-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.380537 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f0bc2d5-070a-415d-b477-914c63ad7b57-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.380556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscwj\" (UniqueName: \"kubernetes.io/projected/1f0bc2d5-070a-415d-b477-914c63ad7b57-kube-api-access-cscwj\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.380591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f0bc2d5-070a-415d-b477-914c63ad7b57-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.384927 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.384982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.384994 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.385017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.385032 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.391983 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: 
Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802
767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.413812 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.427771 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.442508 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.453440 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.468910 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.481623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f0bc2d5-070a-415d-b477-914c63ad7b57-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.482082 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f0bc2d5-070a-415d-b477-914c63ad7b57-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.482613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f0bc2d5-070a-415d-b477-914c63ad7b57-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.482636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1f0bc2d5-070a-415d-b477-914c63ad7b57-env-overrides\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.482694 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscwj\" (UniqueName: \"kubernetes.io/projected/1f0bc2d5-070a-415d-b477-914c63ad7b57-kube-api-access-cscwj\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.483107 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1f0bc2d5-070a-415d-b477-914c63ad7b57-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.487918 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1f0bc2d5-070a-415d-b477-914c63ad7b57-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.488693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.488914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.488925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.488940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.488952 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.497344 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.512044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscwj\" (UniqueName: \"kubernetes.io/projected/1f0bc2d5-070a-415d-b477-914c63ad7b57-kube-api-access-cscwj\") pod \"ovnkube-control-plane-749d76644c-z5x46\" (UID: \"1f0bc2d5-070a-415d-b477-914c63ad7b57\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.513773 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.528258 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources
\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.542688 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.551029 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:44:31.916464481 +0000 UTC Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.555008 4823 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.559926 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.559996 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.559934 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:20 crc kubenswrapper[4823]: E0126 14:47:20.560271 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:20 crc kubenswrapper[4823]: E0126 14:47:20.560605 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:20 crc kubenswrapper[4823]: E0126 14:47:20.560600 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.567111 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.592443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.592548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.592559 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.592603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.592615 4823 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.598295 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.648591 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" Jan 26 14:47:20 crc kubenswrapper[4823]: W0126 14:47:20.676251 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f0bc2d5_070a_415d_b477_914c63ad7b57.slice/crio-4591e5eedef6baec8c6e596fb8f6e589ca54089ad08d820a717b97f211519d6b WatchSource:0}: Error finding container 4591e5eedef6baec8c6e596fb8f6e589ca54089ad08d820a717b97f211519d6b: Status 404 returned error can't find the container with id 4591e5eedef6baec8c6e596fb8f6e589ca54089ad08d820a717b97f211519d6b Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.695951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.695977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.695986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.696001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.696010 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.799232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.799278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.799289 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.799313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.799323 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.808657 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" event={"ID":"1f0bc2d5-070a-415d-b477-914c63ad7b57","Type":"ContainerStarted","Data":"4591e5eedef6baec8c6e596fb8f6e589ca54089ad08d820a717b97f211519d6b"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.810860 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/0.log" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.814388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.814553 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.832250 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.850505 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.865666 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.882004 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.896505 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.902689 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.902743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.902756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.903269 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.903316 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:20Z","lastTransitionTime":"2026-01-26T14:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.912961 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.925312 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.945635 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 
14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.967346 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.980627 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:20 crc kubenswrapper[4823]: I0126 14:47:20.995698 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:20Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.006812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.006869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.006884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.006908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.006929 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.011762 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.025230 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.045639 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.059860 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.110866 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.110930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.110946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.110969 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.110983 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.213750 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.213803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.213815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.213833 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.213845 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.317155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.317213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.317228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.317250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.317265 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.419720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.419777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.419795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.419821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.419844 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.522908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.522954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.522963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.522980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.522989 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.551695 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 20:22:14.073581916 +0000 UTC Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.625673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.625727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.625741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.625762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.625775 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.728753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.728795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.728805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.728824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.728837 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.816214 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.831824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.831857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.831869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.831885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.831902 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.831910 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-dh4f9"] Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.832389 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:21 crc kubenswrapper[4823]: E0126 14:47:21.832457 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.855686 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.867554 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.876515 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.896177 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 
14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.903126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzsp\" (UniqueName: \"kubernetes.io/projected/35318be8-9029-4606-8a04-feec32098d9c-kube-api-access-5wzsp\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.903178 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " 
pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.915773 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\
\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc47827
4c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26
T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.928485 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.934221 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.934254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.934264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:21 crc 
kubenswrapper[4823]: I0126 14:47:21.934279 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.934291 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:21Z","lastTransitionTime":"2026-01-26T14:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.941179 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.951798 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.964758 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.980397 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:21 crc kubenswrapper[4823]: I0126 14:47:21.991991 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:21Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.003781 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzsp\" (UniqueName: \"kubernetes.io/projected/35318be8-9029-4606-8a04-feec32098d9c-kube-api-access-5wzsp\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.003840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.003958 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.004028 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:47:22.504010061 +0000 UTC m=+39.189473176 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.005678 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.017468 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.023067 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzsp\" (UniqueName: 
\"kubernetes.io/projected/35318be8-9029-4606-8a04-feec32098d9c-kube-api-access-5wzsp\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.027016 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc 
kubenswrapper[4823]: I0126 14:47:22.036896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.036931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.036941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.036965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.036980 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.038839 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.053953 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.066765 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.139693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.139737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.139748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.139766 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.139779 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.242232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.242295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.242314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.242343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.242392 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.345732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.345809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.345828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.345866 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.345887 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.448545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.448590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.448600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.448623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.448635 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.508459 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.508714 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.509017 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:47:23.508998054 +0000 UTC m=+40.194461159 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.551821 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 01:13:12.976462587 +0000 UTC Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.553000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.553103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.553124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.553151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.553182 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.559730 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.559779 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.559943 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.560125 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.560167 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.560348 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.655940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.655977 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.655989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.656007 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.656018 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.758118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.758165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.758174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.758190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.758200 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.825226 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" event={"ID":"1f0bc2d5-070a-415d-b477-914c63ad7b57","Type":"ContainerStarted","Data":"513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.825434 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" event={"ID":"1f0bc2d5-070a-415d-b477-914c63ad7b57","Type":"ContainerStarted","Data":"f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.827350 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/1.log" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.828217 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/0.log" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.831606 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48" exitCode=1 Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.831665 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.831746 4823 scope.go:117] "RemoveContainer" containerID="158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.832615 
4823 scope.go:117] "RemoveContainer" containerID="a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48" Jan 26 14:47:22 crc kubenswrapper[4823]: E0126 14:47:22.832844 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.843335 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c
07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.858268 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.860177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc 
kubenswrapper[4823]: I0126 14:47:22.860214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.860228 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.860244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.860253 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.870836 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.883521 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.897689 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.912044 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.935249 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.948687 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.959239 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.964548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.964592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.964615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.964634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.964647 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:22Z","lastTransitionTime":"2026-01-26T14:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.977317 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 
14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\
\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:22 crc kubenswrapper[4823]: I0126 14:47:22.990967 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:22Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.004770 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.017817 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.028341 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.038950 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.053981 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.063297 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.066848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.066887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.066899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.066919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.066931 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.074780 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.087764 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.102625 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.123952 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0126 
14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy event handler 
4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.144795 4823 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b36
64f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993c
b9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.158080 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.168401 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.169440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.169481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.169495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.169521 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.169540 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.181113 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.191159 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca
0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.208286 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602
770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:
13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.219348 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a2482
9b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.231093 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.246060 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.260932 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.272323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.272387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.272401 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.272424 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.272440 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.275280 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.292137 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.307309 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.375179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc 
kubenswrapper[4823]: I0126 14:47:23.375220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.375232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.375246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.375258 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.477084 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.477116 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.477124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.477138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.477146 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.520028 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:23 crc kubenswrapper[4823]: E0126 14:47:23.520156 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:23 crc kubenswrapper[4823]: E0126 14:47:23.520227 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:47:25.520210093 +0000 UTC m=+42.205673198 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.552588 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:15:52.228776169 +0000 UTC Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.560843 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:23 crc kubenswrapper[4823]: E0126 14:47:23.560991 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.572426 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.582823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.582873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.582894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.582913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.582927 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.589784 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.601238 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.611905 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.620482 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.631418 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.651764 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.667836 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9
ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.682728 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.685406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.685429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.685437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.685450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.685460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.695886 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc 
kubenswrapper[4823]: I0126 14:47:23.711980 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.726129 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.743030 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.766469 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.777451 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.788819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.788889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.788906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.788930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.788944 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.789686 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.812637 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://158962076b44e922a8d89d94f6ba83a5359198d887d105bf701c4078b34ac597\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:19Z\\\",\\\"message\\\":\\\"mns:[] Mutations:[{Column:policies Mutator:insert Value:{GoSet:[{GoUUID:a5a72d02-1a0f-4f7f-a8c5-6923a1c4274a}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f6d604c1-9711-4e25-be6c-79ec28bbad1b}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0126 14:47:18.504111 6159 address_set.go:302] New(0d39bc5c-d5b9-432c-81be-2275bce5d7aa/default-network-controller:EgressIP:node-ips:v4:default/a712973235162149816) with []\\\\nI0126 14:47:18.504137 6159 address_set.go:302] New(aa6fc2dc-fab0-4812-b9da-809058e4dcf7/default-network-controller:EgressIP:egressip-served-pods:v4:default/a8519615025667110816) with []\\\\nI0126 14:47:18.504155 6159 address_set.go:302] New(bf133528-8652-4c84-85ff-881f0afe9837/default-network-controller:EgressService:egresssvc-served-pods:v4/a13607449821398607916) with []\\\\nI0126 14:47:18.504216 6159 factory.go:1336] Added *v1.Node event handler 7\\\\nI0126 14:47:18.504309 6159 factory.go:1336] Added *v1.EgressIP event handler 8\\\\nI0126 14:47:18.504767 6159 factory.go:1336] Added *v1.EgressFirewall event handler 9\\\\nI0126 14:47:18.504940 6159 controller.go:132] Adding controller ef_node_controller event handlers\\\\nI0126 14:47:18.505036 6159 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:18.505113 6159 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:47:18.505259 6159 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy 
event handler 4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\
\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:23Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.835531 4823 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/1.log" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.891632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.891681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.891721 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.891742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.891755 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.994539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.994580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.994591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.994606 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:23 crc kubenswrapper[4823]: I0126 14:47:23.994615 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:23Z","lastTransitionTime":"2026-01-26T14:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.097256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.097320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.097332 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.097348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.097357 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.199629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.199673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.199683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.199697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.199705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.302417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.302456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.302467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.302483 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.302494 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.405215 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.405255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.405264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.405278 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.405288 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.508046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.508445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.508647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.508796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.508922 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.553019 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:28:46.827181098 +0000 UTC Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.559316 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.559436 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:24 crc kubenswrapper[4823]: E0126 14:47:24.559467 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:24 crc kubenswrapper[4823]: E0126 14:47:24.559612 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.559803 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:24 crc kubenswrapper[4823]: E0126 14:47:24.560047 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.612173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.612508 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.612654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.612798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.612938 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.715890 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.715941 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.715954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.715975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.715987 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.818139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.818179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.818187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.818200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.818209 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.920330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.920386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.920401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.920422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:24 crc kubenswrapper[4823]: I0126 14:47:24.920435 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:24Z","lastTransitionTime":"2026-01-26T14:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.024033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.024078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.024086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.024101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.024111 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.126527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.126564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.126572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.126586 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.126595 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.229261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.229344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.229392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.229436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.229455 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.331665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.331698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.331716 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.331734 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.331745 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.433737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.433821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.433844 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.433876 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.433895 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.537598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.537670 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.537688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.537710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.537726 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.543220 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:25 crc kubenswrapper[4823]: E0126 14:47:25.543467 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:25 crc kubenswrapper[4823]: E0126 14:47:25.543560 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:47:29.543539286 +0000 UTC m=+46.229002391 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.554073 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:49:46.581215462 +0000 UTC Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.559485 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:25 crc kubenswrapper[4823]: E0126 14:47:25.559628 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.640670 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.640757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.640795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.640820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.640836 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.742988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.743029 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.743038 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.743053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.743063 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.844557 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.844607 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.844619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.844635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.844645 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.947037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.947069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.947078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.947091 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:25 crc kubenswrapper[4823]: I0126 14:47:25.947100 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:25Z","lastTransitionTime":"2026-01-26T14:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.049439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.049488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.049500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.049517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.049529 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.151346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.151399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.151407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.151422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.151434 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.253465 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.253500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.253539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.253557 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.253569 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.356403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.356665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.356697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.356710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.356718 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.459165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.459225 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.459250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.459282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.459305 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.554233 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:04:36.849252632 +0000 UTC Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.559548 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.559639 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:26 crc kubenswrapper[4823]: E0126 14:47:26.559662 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.559747 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:26 crc kubenswrapper[4823]: E0126 14:47:26.559931 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:26 crc kubenswrapper[4823]: E0126 14:47:26.560022 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.561606 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.561650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.561665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.561687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.561705 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.665412 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.665468 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.665491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.665520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.665542 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.768767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.768831 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.768850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.768875 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.768892 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.873761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.873848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.873871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.873901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.873925 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.976722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.976776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.976795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.976817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:26 crc kubenswrapper[4823]: I0126 14:47:26.976834 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:26Z","lastTransitionTime":"2026-01-26T14:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.079198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.079242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.079258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.079281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.079299 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.182230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.182292 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.182317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.182346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.182401 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.284714 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.284807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.284828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.285241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.285299 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.385554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.385587 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.385595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.385610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.385620 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: E0126 14:47:27.405872 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:27Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.411191 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.411264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.411290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.411323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.411346 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: E0126 14:47:27.430867 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:27Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.435701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.435748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.435764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.435787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.435803 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: E0126 14:47:27.476608 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:27Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.481960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.482033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.482055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.482081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.482102 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: E0126 14:47:27.498738 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:27Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:27 crc kubenswrapper[4823]: E0126 14:47:27.498957 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.500688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.500756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.500768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.500781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.500805 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.555155 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:41:50.460192888 +0000 UTC Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.559659 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:27 crc kubenswrapper[4823]: E0126 14:47:27.559851 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.603969 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.604027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.604052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.604081 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.604102 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.706938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.706974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.706983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.707000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.707013 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.809597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.809630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.809639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.809676 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.809687 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.912205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.912255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.912268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.912285 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:27 crc kubenswrapper[4823]: I0126 14:47:27.912296 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:27Z","lastTransitionTime":"2026-01-26T14:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.014568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.014612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.014621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.014635 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.014645 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.117258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.117313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.117326 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.117342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.117356 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.219525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.219553 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.219562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.219574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.219584 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.322534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.322568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.322579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.322594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.322604 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.425451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.425491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.425501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.425517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.425526 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.527471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.527742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.527933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.528016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.528079 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.556248 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 06:05:04.881507961 +0000 UTC Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.559584 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:28 crc kubenswrapper[4823]: E0126 14:47:28.559777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.559655 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:28 crc kubenswrapper[4823]: E0126 14:47:28.559975 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.559584 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:28 crc kubenswrapper[4823]: E0126 14:47:28.560176 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.630885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.630924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.630934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.630950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.630965 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.733345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.733576 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.733689 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.733805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.733913 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.835581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.835610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.835621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.835669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.835682 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.938849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.938920 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.938935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.938957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:28 crc kubenswrapper[4823]: I0126 14:47:28.938972 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:28Z","lastTransitionTime":"2026-01-26T14:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.042477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.043017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.043163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.043406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.043578 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.147633 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.147702 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.147720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.147745 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.147764 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.251178 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.251243 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.251258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.251274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.251287 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.354986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.355075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.355096 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.355123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.355142 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.458133 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.458205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.458226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.458306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.458354 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.557110 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 19:41:48.279107082 +0000 UTC Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.559753 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:29 crc kubenswrapper[4823]: E0126 14:47:29.560043 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.563201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.563257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.563276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.563306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.563327 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.590661 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:29 crc kubenswrapper[4823]: E0126 14:47:29.590864 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:29 crc kubenswrapper[4823]: E0126 14:47:29.590998 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:47:37.590955961 +0000 UTC m=+54.276419246 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.666634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.666686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.666698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.666718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.666734 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.769903 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.769960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.769975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.769995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.770007 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.872180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.872224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.872233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.872248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.872257 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.975688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.975753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.975777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.975806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:29 crc kubenswrapper[4823]: I0126 14:47:29.975829 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:29Z","lastTransitionTime":"2026-01-26T14:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.078921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.078992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.079017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.079048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.079066 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.182346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.182460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.182487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.182518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.182542 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.286249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.286943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.286976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.287015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.287050 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.389560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.389598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.389610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.389627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.389641 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.492004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.492055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.492069 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.492090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.492103 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.558092 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:14:21.753347286 +0000 UTC Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.559327 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.559408 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.559426 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:30 crc kubenswrapper[4823]: E0126 14:47:30.559475 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:30 crc kubenswrapper[4823]: E0126 14:47:30.559545 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:30 crc kubenswrapper[4823]: E0126 14:47:30.559669 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.594453 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.594509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.594527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.594551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.594569 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.700479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.700572 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.700592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.700620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.700646 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.804185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.804243 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.804258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.804281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.804296 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.905917 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.905964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.905978 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.905995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:30 crc kubenswrapper[4823]: I0126 14:47:30.906017 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:30Z","lastTransitionTime":"2026-01-26T14:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.008435 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.008478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.008493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.008514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.008532 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.111404 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.111449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.111471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.111501 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.111524 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.214515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.214578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.214602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.214634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.214660 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.317854 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.317910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.317926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.317947 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.317961 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.421842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.421906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.421921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.421946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.421957 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.525211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.525265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.525277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.525298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.525310 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.558620 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 13:54:02.12971233 +0000 UTC Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.560169 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:31 crc kubenswrapper[4823]: E0126 14:47:31.560491 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.628560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.628631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.628656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.628697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.628721 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.732859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.732909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.732927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.732956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.732974 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.835793 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.835881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.835901 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.835930 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.835949 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.939455 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.939512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.939526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.939545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:31 crc kubenswrapper[4823]: I0126 14:47:31.939559 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:31Z","lastTransitionTime":"2026-01-26T14:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.042735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.042812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.042829 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.042856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.042874 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.146808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.146879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.146898 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.146925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.146946 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.250993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.251085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.251103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.251144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.251165 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.342551 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.344120 4823 scope.go:117] "RemoveContainer" containerID="a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.358457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.358551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.358597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.358638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.358665 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.367122 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad
818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.389624 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.405116 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc 
kubenswrapper[4823]: I0126 14:47:32.420905 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.437081 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.452759 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.463343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.463415 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.463426 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.463442 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.463453 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.474580 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.488326 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.501981 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.521144 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.540010 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.554606 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.559394 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:41:18.431390344 +0000 UTC Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.559553 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.559553 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.559727 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:32 crc kubenswrapper[4823]: E0126 14:47:32.559841 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:32 crc kubenswrapper[4823]: E0126 14:47:32.560089 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:32 crc kubenswrapper[4823]: E0126 14:47:32.560167 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.567989 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.568021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.568033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.568052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.568064 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.568238 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.583815 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.606174 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.621359 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.641348 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:32Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.674165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.674251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.674277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.674339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.674395 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.779109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.779176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.779210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.779257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.779280 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.883618 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.883693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.883717 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.883746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.883767 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.986519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.986574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.986589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.986610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:32 crc kubenswrapper[4823]: I0126 14:47:32.986626 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:32Z","lastTransitionTime":"2026-01-26T14:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.090269 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.090339 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.090352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.090406 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.090421 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.193217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.193283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.193309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.193341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.193407 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.295706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.295752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.295763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.295781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.295794 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.398397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.398660 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.398670 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.398687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.398696 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.501534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.501592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.501605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.501626 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.501640 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.560512 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 07:10:52.233005658 +0000 UTC Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.560566 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:33 crc kubenswrapper[4823]: E0126 14:47:33.560748 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.579284 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b2670
2f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b
90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"fin
ishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.591448 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.601625 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.603896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.603925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.603933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.603949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.603957 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.623583 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.636579 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.647092 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.663943 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-
01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.676482 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.694723 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.706830 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.706864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.706874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.706888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.706897 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.709237 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.722667 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.736166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.751790 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.764636 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc 
kubenswrapper[4823]: I0126 14:47:33.778825 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.832820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.832855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.832865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.832885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.832895 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.837880 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.855488 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\
\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.876925 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/1.log" Jan 26 
14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.879811 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.880539 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.896528 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.911291 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.924568 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.935041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.935078 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:33 crc 
kubenswrapper[4823]: I0126 14:47:33.935086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.935099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.935109 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:33Z","lastTransitionTime":"2026-01-26T14:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.936039 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.957992 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at 
address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.979832 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:33 crc kubenswrapper[4823]: I0126 14:47:33.991291 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:33Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.004551 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.015442 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.026179 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.036662 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.036687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.036696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.036709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.036717 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.039872 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.050729 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.063045 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.078449 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.088898 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.097998 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc 
kubenswrapper[4823]: I0126 14:47:34.109037 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.138768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.138808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.138817 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.138833 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.138844 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.241075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.241124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.241143 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.241167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.241184 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.344664 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.344744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.344767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.345267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.345617 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.348217 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.348470 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:48:06.348429119 +0000 UTC m=+83.033892284 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.348676 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.348880 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.348981 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:48:06.348959223 +0000 UTC m=+83.034422478 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.448057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.448092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.448100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.448115 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.448125 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.449299 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.449393 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449501 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449531 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449561 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449577 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.449557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449761 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449792 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449815 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449899 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:48:06.449570315 +0000 UTC m=+83.135033460 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449941 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:48:06.449926874 +0000 UTC m=+83.135390019 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.449971 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:48:06.449959805 +0000 UTC m=+83.135422950 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.551082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.551128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.551145 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.551161 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.551171 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.559552 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.559698 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.560131 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.560208 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.560264 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.560313 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.561567 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:49:52.676680203 +0000 UTC Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.654024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.654329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.654438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.654513 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.654572 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.756356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.756405 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.756414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.756428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.756439 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.858588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.858629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.858637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.858654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.858664 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.884995 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/2.log" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.886064 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/1.log" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.888552 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1" exitCode=1 Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.888600 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.888736 4823 scope.go:117] "RemoveContainer" containerID="a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.889668 4823 scope.go:117] "RemoveContainer" containerID="e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1" Jan 26 14:47:34 crc kubenswrapper[4823]: E0126 14:47:34.891773 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.901447 4823 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.912250 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.920090 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc 
kubenswrapper[4823]: I0126 14:47:34.930498 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.942224 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.952971 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.960563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.960596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.960605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.960619 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.960629 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:34Z","lastTransitionTime":"2026-01-26T14:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.969505 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.978590 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:34 crc kubenswrapper[4823]: I0126 14:47:34.988952 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:34Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.004716 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local 
for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.015851 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedA
t\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192
.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.026521 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.035902 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.044048 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.054163 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.062481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.062512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.062520 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.062535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.062546 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.067402 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.080637 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.164605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.164654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.164670 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.164688 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.164699 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.267295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.267341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.267353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.267395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.267408 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.316739 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.327712 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.337059 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be
3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.347220 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.357162 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.369867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.369913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.369926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.369942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.369953 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.374077 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a9967ec1edd830ebea7a1df6c80f093182560b92bc8f295d1b132ee25453ee48\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"r removal\\\\nI0126 14:47:20.792505 6309 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0126 14:47:20.792510 6309 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0126 14:47:20.792523 6309 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0126 14:47:20.792527 6309 handler.go:190] Sending *v1.Node 
event handler 7 for removal\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0126 14:47:20.792553 6309 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0126 14:47:20.792584 6309 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0126 14:47:20.792568 6309 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0126 14:47:20.792590 6309 handler.go:208] Removed *v1.Node event handler 7\\\\nI0126 14:47:20.792599 6309 handler.go:208] Removed *v1.Node event handler 2\\\\nI0126 14:47:20.792616 6309 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0126 14:47:20.792646 6309 factory.go:656] Stopping watch factory\\\\nI0126 14:47:20.792664 6309 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:47:20.792684 6309 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0126 14:47:20.792693 6309 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0126 14:47:20.792707 6309 metrics.go:553] Stopping metrics server at address\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local 
for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/
var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.384693 4823 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.397262 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.407907 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.420615 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-
01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.431430 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.442508 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.451734 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.461972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.472248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.472316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.472331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.472352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.472387 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.473191 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.481834 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc 
kubenswrapper[4823]: I0126 14:47:35.492683 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.503032 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.514839 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.560330 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:35 crc kubenswrapper[4823]: E0126 14:47:35.560531 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.562703 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:12:18.564885327 +0000 UTC Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.574585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.574627 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.574638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.574672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.574685 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.676610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.676665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.676675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.676687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.676696 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.778488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.778534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.778545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.778562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.778575 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.881135 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.881165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.881174 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.881187 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.881196 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.893337 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/2.log" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.896253 4823 scope.go:117] "RemoveContainer" containerID="e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1" Jan 26 14:47:35 crc kubenswrapper[4823]: E0126 14:47:35.896375 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.913157 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.922676 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.930671 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.955818 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.970675 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.982051 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.983707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.983811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.983889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.983967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.984041 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:35Z","lastTransitionTime":"2026-01-26T14:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:35 crc kubenswrapper[4823]: I0126 14:47:35.993531 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:35Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.006087 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.018145 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.027327 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.037255 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.047644 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.057105 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.068285 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596
f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.077094 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.087053 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.087171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.087250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.087323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.087307 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.087395 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.099235 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.110141 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-id
entity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:36Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.190241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.190275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.190284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.190298 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.190306 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.292467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.292700 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.292815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.292905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.292996 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.395500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.395540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.395551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.395568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.395579 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.498322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.498355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.498414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.498429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.498443 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.559514 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:36 crc kubenswrapper[4823]: E0126 14:47:36.559822 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.560166 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:36 crc kubenswrapper[4823]: E0126 14:47:36.560279 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.560414 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:36 crc kubenswrapper[4823]: E0126 14:47:36.560543 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.563694 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 10:10:57.939392484 +0000 UTC Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.601147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.601186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.601195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.601214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.601225 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.703744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.703828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.703837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.703855 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.703867 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.805771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.805821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.805832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.805850 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.805862 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.907788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.907824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.907834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.907851 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:36 crc kubenswrapper[4823]: I0126 14:47:36.907861 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:36Z","lastTransitionTime":"2026-01-26T14:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.010324 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.010388 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.010397 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.010413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.010430 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.112924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.112965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.112976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.112990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.113000 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.215556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.215601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.215614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.215630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.215641 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.317393 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.317431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.317440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.317455 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.317465 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.419683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.419722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.419732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.419746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.419755 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.522795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.522848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.522868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.522928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.522948 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.560175 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.560323 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.564097 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 08:19:56.292138115 +0000 UTC Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.625594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.625630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.625638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.625656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.625666 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.679280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.679483 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.679552 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:47:53.679534376 +0000 UTC m=+70.364997471 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.728862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.728907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.728919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.728936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.728948 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.835316 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.835590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.835672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.835754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.835839 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.852499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.852590 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.852608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.852632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.852649 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.869690 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:37Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.874871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.875119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.875197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.875308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.875429 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.889482 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:37Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.893708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.893746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.893756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.893774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.893786 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.908023 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:37Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.913536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.913571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.913581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.913599 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.913612 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.926820 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:37Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.932131 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.932217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.932242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.932272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.932297 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.948443 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:37Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:37 crc kubenswrapper[4823]: E0126 14:47:37.948593 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.951290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.951322 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.951352 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.951395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:37 crc kubenswrapper[4823]: I0126 14:47:37.951413 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:37Z","lastTransitionTime":"2026-01-26T14:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.054111 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.054154 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.054163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.054183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.054193 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.161017 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.161168 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.161538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.161657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.161739 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.264824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.264950 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.264960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.264980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.264989 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.367226 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.367265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.367275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.367287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.367295 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.469247 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.469291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.469301 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.469315 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.469325 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.559848 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:38 crc kubenswrapper[4823]: E0126 14:47:38.560035 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.560043 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.560105 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:38 crc kubenswrapper[4823]: E0126 14:47:38.560245 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:38 crc kubenswrapper[4823]: E0126 14:47:38.560434 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.564854 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 03:09:01.254054133 +0000 UTC Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.573094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.573134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.573146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.573162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.573175 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.676248 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.676286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.676295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.676308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.676318 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.778558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.778596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.778612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.778632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.778644 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.882041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.882200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.882229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.882263 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.882289 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.986652 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.986726 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.986746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.986775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:38 crc kubenswrapper[4823]: I0126 14:47:38.986800 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:38Z","lastTransitionTime":"2026-01-26T14:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.088986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.089022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.089032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.089049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.089061 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.192207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.192256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.192265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.192282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.192292 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.295286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.295341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.295353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.295394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.295406 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.398103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.398148 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.398159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.398180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.398195 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.500978 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.501012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.501020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.501036 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.501044 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.559583 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:39 crc kubenswrapper[4823]: E0126 14:47:39.559713 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.565002 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 07:48:25.584457588 +0000 UTC Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.603200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.603239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.603249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.603264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.603275 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.705991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.706032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.706045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.706061 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.706073 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.807790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.808059 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.808136 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.808205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.808266 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.910203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.910237 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.910251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.910267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:39 crc kubenswrapper[4823]: I0126 14:47:39.910277 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:39Z","lastTransitionTime":"2026-01-26T14:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.011838 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.011888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.011905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.011929 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.011947 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.114478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.114525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.114540 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.114560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.114571 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.217418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.217708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.217776 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.217839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.217896 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.320142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.320173 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.320182 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.320195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.320204 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.422434 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.422473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.422485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.422504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.422517 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.525330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.525387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.525398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.525410 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.525421 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.559537 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:40 crc kubenswrapper[4823]: E0126 14:47:40.559655 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.559817 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:40 crc kubenswrapper[4823]: E0126 14:47:40.559924 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.560027 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:40 crc kubenswrapper[4823]: E0126 14:47:40.560123 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.566705 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 06:10:01.413650277 +0000 UTC Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.628779 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.629109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.629200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.629265 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.629330 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.731453 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.731491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.731500 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.731515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.731525 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.833696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.833728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.833737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.833752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.833763 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.936965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.937488 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.937582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.937667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:40 crc kubenswrapper[4823]: I0126 14:47:40.937750 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:40Z","lastTransitionTime":"2026-01-26T14:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.040033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.040065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.040073 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.040089 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.040097 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.142282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.142317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.142331 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.142348 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.142383 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.244592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.244623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.244631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.244643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.244652 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.346622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.346650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.346658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.346687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.346696 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.449343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.449425 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.449437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.449451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.449460 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.552525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.552869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.552982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.553088 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.553183 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.559967 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:41 crc kubenswrapper[4823]: E0126 14:47:41.560166 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.566769 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 10:54:28.782590321 +0000 UTC Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.656693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.656737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.656747 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.656766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.656776 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.760924 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.761177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.761302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.761401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.761468 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.866191 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.866267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.866286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.866334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.866354 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.969662 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.969699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.969709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.969725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:41 crc kubenswrapper[4823]: I0126 14:47:41.969734 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:41Z","lastTransitionTime":"2026-01-26T14:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.072971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.073022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.073032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.073052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.073069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.176353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.176413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.176422 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.176439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.176451 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.278631 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.278671 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.278683 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.278700 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.278712 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.381072 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.381149 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.381162 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.381180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.381191 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.484655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.484737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.484760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.484796 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.484819 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.559881 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.559881 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.559912 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:42 crc kubenswrapper[4823]: E0126 14:47:42.560626 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:42 crc kubenswrapper[4823]: E0126 14:47:42.560694 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:42 crc kubenswrapper[4823]: E0126 14:47:42.560743 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.567152 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:40:08.369043331 +0000 UTC Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.587405 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.587457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.587470 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.587493 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.587507 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.690755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.691479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.691757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.691932 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.692061 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.796086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.796843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.796883 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.796913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.796929 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.900358 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.900442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.900456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.900479 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:42 crc kubenswrapper[4823]: I0126 14:47:42.900491 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:42Z","lastTransitionTime":"2026-01-26T14:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.003402 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.003499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.003526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.003556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.003579 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.107055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.107103 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.107112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.107129 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.107141 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.210699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.210766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.210780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.210799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.210810 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.313895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.313959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.313973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.314000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.314021 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.417596 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.417636 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.417644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.417659 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.417668 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.520390 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.520705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.520719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.520739 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.520751 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.559777 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:43 crc kubenswrapper[4823]: E0126 14:47:43.559928 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.567432 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 06:37:41.588359592 +0000 UTC Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.573047 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc 
kubenswrapper[4823]: I0126 14:47:43.587513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.602589 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.618289 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596
f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\
":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.622579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.622625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.622638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.622655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.622669 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.630575 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.641541 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.652649 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.668524 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.688335 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.699548 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.708575 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.716277 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.724874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.724912 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.724925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.724943 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.724954 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.726739 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.737597 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.746137 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.755649 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\
\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o
://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.765528 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.774968 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:43Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.827255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.827323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.827335 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.827355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.827391 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.929481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.929541 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.929553 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.929571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:43 crc kubenswrapper[4823]: I0126 14:47:43.929581 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:43Z","lastTransitionTime":"2026-01-26T14:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.032147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.032188 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.032197 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.032212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.032221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.134571 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.134624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.134636 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.134654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.134666 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.237153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.237190 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.237199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.237211 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.237220 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.339702 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.339752 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.339764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.339783 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.339793 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.442303 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.442334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.442341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.442355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.442384 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.544261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.544334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.544346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.544387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.544406 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.559746 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.559746 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.559844 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:44 crc kubenswrapper[4823]: E0126 14:47:44.559961 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:44 crc kubenswrapper[4823]: E0126 14:47:44.560051 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:44 crc kubenswrapper[4823]: E0126 14:47:44.560112 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.567961 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 19:46:01.023012077 +0000 UTC Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.647094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.647130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.647138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.647152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.647161 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.750170 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.750222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.750232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.750250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.750263 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.852959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.853004 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.853016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.853033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.853056 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.955605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.955650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.955661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.955679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:44 crc kubenswrapper[4823]: I0126 14:47:44.955694 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:44Z","lastTransitionTime":"2026-01-26T14:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.058804 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.058871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.058886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.058907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.058920 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.162763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.162806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.162816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.162834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.162845 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.265743 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.265785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.265795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.265808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.265818 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.368839 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.368921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.368965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.369002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.369027 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.472871 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.472961 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.472993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.473023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.473045 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.560195 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:45 crc kubenswrapper[4823]: E0126 14:47:45.560488 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.568202 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:20:20.781580013 +0000 UTC Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.575087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.575140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.575167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.575196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.575220 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.678516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.678592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.678608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.678691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.678717 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.780970 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.781020 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.781030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.781050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.781066 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.884423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.884498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.884522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.884560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.884584 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.987419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.987454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.987462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.987477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:45 crc kubenswrapper[4823]: I0126 14:47:45.987485 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:45Z","lastTransitionTime":"2026-01-26T14:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.091442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.091512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.091528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.091552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.091566 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.194673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.194733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.194748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.194769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.194783 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.297258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.297311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.297320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.297340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.297352 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.399954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.400013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.400026 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.400044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.400057 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.508925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.508988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.509001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.509022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.509039 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.559821 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:46 crc kubenswrapper[4823]: E0126 14:47:46.560192 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.560045 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:46 crc kubenswrapper[4823]: E0126 14:47:46.560410 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.560036 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:46 crc kubenswrapper[4823]: E0126 14:47:46.560604 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.569242 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 17:08:45.55760838 +0000 UTC Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.611158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.611240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.611256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.611275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.611290 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.713828 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.713863 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.713874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.713888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.713897 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.816606 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.816647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.816658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.816675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.816686 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.919630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.919741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.919760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.919786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:46 crc kubenswrapper[4823]: I0126 14:47:46.919803 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:46Z","lastTransitionTime":"2026-01-26T14:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.021711 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.021754 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.021768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.021785 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.021801 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.124766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.124811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.124824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.124843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.124857 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.227661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.227727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.227737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.227758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.227772 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.330302 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.330336 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.330346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.330386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.330396 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.433140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.433210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.433219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.433239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.433248 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.536929 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.536981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.536993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.537013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.537025 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.559801 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:47 crc kubenswrapper[4823]: E0126 14:47:47.559989 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.569719 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 01:53:21.086623295 +0000 UTC Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.640460 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.640538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.640564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.640598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.640626 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.743184 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.743239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.743250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.743273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.743286 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.846067 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.846112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.846128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.846179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.846193 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.947872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.947900 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.947908 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.947923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:47 crc kubenswrapper[4823]: I0126 14:47:47.947933 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:47Z","lastTransitionTime":"2026-01-26T14:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.049914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.049958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.049969 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.050013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.050024 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.075860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.075949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.075971 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.076005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.076030 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.088168 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:48Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.091522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.091619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.091640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.091709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.091740 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.106702 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:48Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.110250 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.110282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.110293 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.110309 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.110319 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.122232 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:48Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.125589 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.125621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.125630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.125644 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.125655 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.147884 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:48Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.154693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.154735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.154746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.154772 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.154784 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.178756 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:48Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.178920 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.180788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.180819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.180827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.180841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.180850 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.286273 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.286314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.286324 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.286340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.286352 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.388728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.388768 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.388777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.388792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.388802 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.491165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.491198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.491206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.491219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.491228 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.560255 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.560296 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.560405 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.560412 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.560514 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:48 crc kubenswrapper[4823]: E0126 14:47:48.560625 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.570413 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:53:30.978927584 +0000 UTC Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.593317 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.593350 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.593386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.593403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.593417 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.696223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.696258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.696268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.696284 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.696295 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.799130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.799177 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.799189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.799207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.799218 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.901451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.901502 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.901513 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.901527 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:48 crc kubenswrapper[4823]: I0126 14:47:48.901536 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:48Z","lastTransitionTime":"2026-01-26T14:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.004863 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.004922 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.004936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.004956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.004970 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.107647 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.107686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.107695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.107709 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.107721 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.210172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.210212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.210222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.210239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.210251 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.312559 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.312597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.312608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.312622 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.312631 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.414824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.414859 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.414868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.414882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.414890 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.516544 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.516584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.516594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.516613 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.516626 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.559604 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:49 crc kubenswrapper[4823]: E0126 14:47:49.559977 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.560229 4823 scope.go:117] "RemoveContainer" containerID="e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1" Jan 26 14:47:49 crc kubenswrapper[4823]: E0126 14:47:49.561666 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.571454 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 20:56:36.612840903 +0000 UTC Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.618530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.618557 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.618567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.618581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.618593 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.721130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.721167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.721181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.721200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.721213 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.823233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.823277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.823289 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.823303 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.823312 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.925990 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.926032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.926041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.926055 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:49 crc kubenswrapper[4823]: I0126 14:47:49.926066 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:49Z","lastTransitionTime":"2026-01-26T14:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.028632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.028675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.028690 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.028706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.028719 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.131057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.131097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.131108 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.131125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.131135 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.234130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.234176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.234188 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.234207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.234219 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.337296 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.337329 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.337338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.337387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.337398 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.440220 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.440264 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.440274 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.440290 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.440300 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.543297 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.543330 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.543340 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.543355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.543385 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.559535 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.559605 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.559537 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:50 crc kubenswrapper[4823]: E0126 14:47:50.559729 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:50 crc kubenswrapper[4823]: E0126 14:47:50.559621 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:50 crc kubenswrapper[4823]: E0126 14:47:50.559858 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.571591 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:48:11.894969397 +0000 UTC Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.645341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.645401 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.645413 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.645438 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.645451 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.747728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.747762 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.747771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.747784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.747793 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.850747 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.850781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.850790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.850807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.850818 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.953328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.953388 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.953399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.953417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:50 crc kubenswrapper[4823]: I0126 14:47:50.953428 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:50Z","lastTransitionTime":"2026-01-26T14:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.056528 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.056856 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.056961 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.057066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.057131 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.159710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.159750 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.159759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.159778 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.159787 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.261656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.261936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.262008 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.262124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.262197 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.364759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.365066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.365132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.365210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.365288 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.467212 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.467523 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.467591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.467669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.467751 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.560135 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:51 crc kubenswrapper[4823]: E0126 14:47:51.560282 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.569436 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.569472 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.569481 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.569495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.569505 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.572568 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 06:56:00.429326398 +0000 UTC Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.672421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.672449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.672458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.672473 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.672560 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.774697 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.774737 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.774747 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.774763 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.774775 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.876699 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.876738 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.876748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.876765 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.876777 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.979792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.979853 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.979869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.979897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:51 crc kubenswrapper[4823]: I0126 14:47:51.979912 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:51Z","lastTransitionTime":"2026-01-26T14:47:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.082550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.082597 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.082609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.082624 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.082634 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.184959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.185000 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.185015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.185031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.185040 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.287239 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.287291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.287305 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.287325 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.287336 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.389757 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.390024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.390112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.390203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.390287 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.492457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.492713 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.492789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.492864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.492932 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.559235 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.559245 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:52 crc kubenswrapper[4823]: E0126 14:47:52.559733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.559265 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:52 crc kubenswrapper[4823]: E0126 14:47:52.559753 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:52 crc kubenswrapper[4823]: E0126 14:47:52.559969 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.573405 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:05:07.94439714 +0000 UTC Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.594939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.594974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.594982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.594998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.595007 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.697506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.697570 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.697583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.697605 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.697618 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.800254 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.800565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.800649 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.800756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.800844 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.903823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.904244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.904342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.904471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:52 crc kubenswrapper[4823]: I0126 14:47:52.904597 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:52Z","lastTransitionTime":"2026-01-26T14:47:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.007146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.007199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.007210 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.007223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.007235 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.109291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.109328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.109337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.109357 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.109383 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.211638 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.211686 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.211698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.211718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.211730 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.318774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.318827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.318841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.318861 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.318872 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.421400 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.421681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.421771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.421872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.421948 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.525039 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.525097 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.525110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.525132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.525144 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.559668 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:53 crc kubenswrapper[4823]: E0126 14:47:53.559835 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.573910 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:50:56.376479264 +0000 UTC Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.579876 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.591341 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.600587 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.617553 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.628152 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.628203 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.628216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.628235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.628251 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.631305 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b
f76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.645331 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.660946 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.675930 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.692041 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.710962 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.727308 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.731144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.731200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.731213 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.731232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.731247 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.742586 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad
818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.752129 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:53 crc kubenswrapper[4823]: E0126 14:47:53.752325 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:53 crc kubenswrapper[4823]: E0126 14:47:53.752435 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs 
podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:48:25.752416999 +0000 UTC m=+102.437880104 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.754600 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-s
cheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.765906 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.775316 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc 
kubenswrapper[4823]: I0126 14:47:53.786631 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.802100 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.815674 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:53Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.833835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.833868 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.833886 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.833912 4823 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.833931 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.936591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.936633 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.936643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.936658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:53 crc kubenswrapper[4823]: I0126 14:47:53.936670 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:53Z","lastTransitionTime":"2026-01-26T14:47:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.038998 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.039027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.039041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.039058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.039069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.146554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.146601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.146612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.146628 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.146639 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.249267 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.249308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.249319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.249335 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.249344 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.351679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.351720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.351730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.351746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.351755 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.453925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.453979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.453992 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.454010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.454023 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.555973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.556021 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.556033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.556051 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.556063 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.559278 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.559330 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:54 crc kubenswrapper[4823]: E0126 14:47:54.559433 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.559449 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:54 crc kubenswrapper[4823]: E0126 14:47:54.559538 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:54 crc kubenswrapper[4823]: E0126 14:47:54.559707 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.574762 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 06:03:53.044100716 +0000 UTC Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.658993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.659042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.659052 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.659068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.659078 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.761098 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.761132 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.761140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.761172 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.761183 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.863282 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.863334 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.863346 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.863387 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.863402 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.955138 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/0.log" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.955185 4823 generic.go:334] "Generic (PLEG): container finished" podID="6e7853ce-0557-452f-b7ae-cc549bf8e2ae" containerID="f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba" exitCode=1 Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.955215 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerDied","Data":"f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.955626 4823 scope.go:117] "RemoveContainer" containerID="f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.965621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.965655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.965665 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.965681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.965690 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:54Z","lastTransitionTime":"2026-01-26T14:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.970450 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b
8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:54Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.983012 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:54Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:54 crc kubenswrapper[4823]: I0126 14:47:54.996085 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:54Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.006768 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.017428 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.036288 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.047310 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.057995 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc
/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.068094 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.068112 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 
14:47:55.068122 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.068136 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.068145 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.068475 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.080419 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"2026-01-26T14:47:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab\\\\n2026-01-26T14:47:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab to /host/opt/cni/bin/\\\\n2026-01-26T14:47:09Z [verbose] multus-daemon started\\\\n2026-01-26T14:47:09Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:47:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.091844 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc 
kubenswrapper[4823]: I0126 14:47:55.104598 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.116470 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.134166 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.153467 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.165527 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.170163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.170195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.170207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.170223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.170236 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.177098 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.196934 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.272867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.272902 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.272910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.272923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.272933 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.374972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.375006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.375016 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.375031 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.375041 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.477760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.477798 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.477807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.477823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.477832 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.559715 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:55 crc kubenswrapper[4823]: E0126 14:47:55.559862 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.575786 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:52:00.521062842 +0000 UTC Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.580594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.580663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.580678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.580696 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.580707 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.683009 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.683066 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.683085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.683118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.683135 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.785789 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.785832 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.785853 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.785870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.785881 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.887919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.887951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.887966 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.887982 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.887998 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.959219 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/0.log" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.959272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerStarted","Data":"3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.971936 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc 
kubenswrapper[4823]: I0126 14:47:55.986452 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.990781 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.990810 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.990819 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.990842 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.990858 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:55Z","lastTransitionTime":"2026-01-26T14:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:55 crc kubenswrapper[4823]: I0126 14:47:55.999223 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:55Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.013430 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"2026-01-26T14:47:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab\\\\n2026-01-26T14:47:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab to /host/opt/cni/bin/\\\\n2026-01-26T14:47:09Z [verbose] multus-daemon started\\\\n2026-01-26T14:47:09Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T14:47:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.029105 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.047185 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.063137 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.086290 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.093656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.093695 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.093703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.093720 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.093731 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.106953 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.120723 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.131755 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.145048 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e
97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.158241 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.173630 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.184527 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.195314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.195344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.195356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.195385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.195397 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.197034 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.209733 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.224214 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:56Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.297862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.297895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.297904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.297918 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.297926 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.400003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.400041 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.400050 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.400068 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.400080 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.502730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.502774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.502786 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.502805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.502816 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.559882 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:56 crc kubenswrapper[4823]: E0126 14:47:56.560035 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.560262 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:56 crc kubenswrapper[4823]: E0126 14:47:56.560331 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.560496 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:56 crc kubenswrapper[4823]: E0126 14:47:56.560603 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.576341 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 10:44:33.044657078 +0000 UTC Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.605354 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.605410 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.605420 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.605442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.605455 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.708281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.708312 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.708324 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.708341 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.708352 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.810383 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.810409 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.810417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.810429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.810439 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.914975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.915015 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.915027 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.915045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:56 crc kubenswrapper[4823]: I0126 14:47:56.915058 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:56Z","lastTransitionTime":"2026-01-26T14:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.017749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.017780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.017790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.017805 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.017817 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.120813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.120861 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.120870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.120885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.120897 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.223206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.223245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.223255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.223272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.223282 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.329734 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.329779 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.329790 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.329807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.329815 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.432241 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.432288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.432300 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.432326 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.432339 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.535516 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.535561 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.535574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.535591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.535602 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.562548 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:57 crc kubenswrapper[4823]: E0126 14:47:57.562695 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.577348 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 23:45:01.852318556 +0000 UTC Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.637899 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.637937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.637945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.637958 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.637971 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.740477 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.740512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.740523 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.740538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.740551 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.842582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.842612 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.842619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.842632 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.842643 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.944640 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.944681 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.944691 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.944707 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:57 crc kubenswrapper[4823]: I0126 14:47:57.944717 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:57Z","lastTransitionTime":"2026-01-26T14:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.046478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.046525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.046542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.046558 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.046569 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.150319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.150542 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.150563 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.150653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.150666 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.252879 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.252926 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.252938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.252955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.252967 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.355062 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.355101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.355109 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.355123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.355134 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.457848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.457877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.457885 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.457898 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.457907 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.487092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.487128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.487139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.487153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.487164 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.500538 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:58Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.504566 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.504594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.504602 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.504616 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.504624 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.516191 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:58Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.519815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.519853 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.519867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.519883 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.519895 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.531202 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:58Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.535510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.535549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.535559 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.535574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.535584 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.547810 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:58Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.551449 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.551491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.551499 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.551514 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.551523 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.560021 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.560090 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.560143 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.560029 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.560216 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.560276 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.564892 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:47:58Z is after 2025-08-24T17:21:41Z" Jan 26 14:47:58 crc kubenswrapper[4823]: E0126 14:47:58.565029 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.566690 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.566725 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.566735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.566749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.566757 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.578305 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:16:04.835282208 +0000 UTC Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.668489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.668526 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.668535 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.668550 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.668560 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.770532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.770568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.770580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.770595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.770608 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.872718 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.872748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.872758 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.872771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.872779 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.974921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.974963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.974975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.974993 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:58 crc kubenswrapper[4823]: I0126 14:47:58.975003 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:58Z","lastTransitionTime":"2026-01-26T14:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.078128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.078180 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.078198 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.078224 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.078235 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.180411 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.180489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.180498 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.180512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.180521 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.282841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.282884 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.282896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.282913 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.282924 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.385308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.385347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.385378 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.385394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.385406 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.487549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.487588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.487600 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.487619 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.487631 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.559810 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:47:59 crc kubenswrapper[4823]: E0126 14:47:59.559953 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.578768 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:08:21.776250131 +0000 UTC Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.589308 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.589338 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.589347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.589380 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.589390 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.691196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.691244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.691253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.691268 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.691277 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.793942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.793995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.794005 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.794024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.794036 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.895658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.895712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.895729 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.895748 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.895759 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.997824 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.997854 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.997864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.997881 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:47:59 crc kubenswrapper[4823]: I0126 14:47:59.997892 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:47:59Z","lastTransitionTime":"2026-01-26T14:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.099888 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.099936 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.099946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.099962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.099971 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.201865 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.201921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.201935 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.201953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.201986 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.304451 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.304487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.304495 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.304510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.304518 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.406782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.406813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.406826 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.406848 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.406860 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.509414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.509462 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.509474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.509491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.509504 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.559952 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.560005 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.560057 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:00 crc kubenswrapper[4823]: E0126 14:48:00.560193 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:00 crc kubenswrapper[4823]: E0126 14:48:00.560454 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:00 crc kubenswrapper[4823]: E0126 14:48:00.560522 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.569915 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.579818 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 07:12:07.294165425 +0000 UTC Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.611399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.611432 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.611444 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.611459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.611471 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.713669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.713751 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.713764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.713782 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.713793 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.816497 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.816531 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.816539 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.816551 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.816559 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.918579 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.918614 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.918623 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.918637 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:00 crc kubenswrapper[4823]: I0126 14:48:00.918647 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:00Z","lastTransitionTime":"2026-01-26T14:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.020962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.021025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.021049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.021079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.021104 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.124010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.124046 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.124056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.124071 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.124082 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.226669 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.226710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.226724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.226740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.226756 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.328764 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.328795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.328803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.328815 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.328826 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.431165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.431208 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.431216 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.431231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.431240 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.533756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.533795 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.533806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.533822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.533836 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.560159 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:01 crc kubenswrapper[4823]: E0126 14:48:01.560287 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.580063 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 11:50:31.1064864 +0000 UTC Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.636344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.636403 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.636415 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.636433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.636444 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.738995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.739029 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.739037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.739049 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.739058 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.842394 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.842429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.842443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.842456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.842466 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.944880 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.944914 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.944923 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.944937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:01 crc kubenswrapper[4823]: I0126 14:48:01.944946 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:01Z","lastTransitionTime":"2026-01-26T14:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.048517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.048553 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.048569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.048585 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.048595 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.150946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.150980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.150988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.151006 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.151015 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.253181 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.253219 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.253229 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.253244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.253252 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.355705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.355744 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.355756 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.355774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.355786 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.458230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.458261 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.458272 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.458288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.458307 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.559527 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.559592 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.559635 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:02 crc kubenswrapper[4823]: E0126 14:48:02.559687 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:02 crc kubenswrapper[4823]: E0126 14:48:02.559746 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:02 crc kubenswrapper[4823]: E0126 14:48:02.559851 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.561025 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.561070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.561082 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.561100 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.561111 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.580876 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 21:10:56.328713885 +0000 UTC Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.663822 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.663880 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.663894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.663912 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.663924 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.766492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.766536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.766574 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.766594 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.766605 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.869954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.869995 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.870007 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.870023 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.870034 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.972255 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.972311 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.972321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.972336 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:02 crc kubenswrapper[4823]: I0126 14:48:02.972345 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:02Z","lastTransitionTime":"2026-01-26T14:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.075233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.075277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.075287 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.075303 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.075335 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.178897 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.178956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.178967 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.178986 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.178995 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.281143 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.281466 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.281536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.281620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.281690 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.385556 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.386484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.386693 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.389474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.390025 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.493517 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.493568 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.493581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.493601 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.493616 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.559500 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:03 crc kubenswrapper[4823]: E0126 14:48:03.560729 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.560817 4823 scope.go:117] "RemoveContainer" containerID="e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.581539 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:59:40.846107215 +0000 UTC Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.596967 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef8
8062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.598178 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.598496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.598509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.598524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.598535 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.620513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.635694 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.647844 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.660627 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e
97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.675972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.692513 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.701834 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.701896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.701909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.701931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.701945 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.707458 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.724432 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.742045 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.758147 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.774703 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc 
kubenswrapper[4823]: I0126 14:48:03.792687 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.804503 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.804549 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.804562 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.804578 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.804590 4823 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.811438 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.829250 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"2026-01-26T14:47:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab\\\\n2026-01-26T14:47:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab to /host/opt/cni/bin/\\\\n2026-01-26T14:47:09Z [verbose] multus-daemon started\\\\n2026-01-26T14:47:09Z [verbose] 
Readiness Indicator file check\\\\n2026-01-26T14:47:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.848883 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.864185 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45312302-ac2e-4a06-8fcf-dd4f6e0baa37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13aa7fa5aa898d3825c5adb254ec7ce99a4f0623492d4c460a00d10323e85756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.883628 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.900199 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.907118 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.907189 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.907199 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.907257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.907273 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:03Z","lastTransitionTime":"2026-01-26T14:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.981662 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/2.log" Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.984853 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} Jan 26 14:48:03 crc kubenswrapper[4823]: I0126 14:48:03.985377 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:03.999972 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45312302-ac2e-4a06-8fcf-dd4f6e0baa37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13aa7fa5aa898d3825c5adb254ec7ce99a4f0623492d4c460a00d10323e85756\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:03Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.011056 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.011144 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.011158 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.011206 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.011221 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.017910 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.036104 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.053231 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.088516 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.105856 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.114003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.114032 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.114040 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.114058 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.114069 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.117839 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.149761 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: 
*v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:48:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.164678 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:4
6Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.178228 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.195579 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.208000 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.217639 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.217847 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.217921 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.217999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.218102 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.222333 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.248102 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.264775 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.276723 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.287195 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.301483 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9997e1c384
fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"2026-01-26T14:47:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab\\\\n2026-01-26T14:47:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab to /host/opt/cni/bin/\\\\n2026-01-26T14:47:09Z [verbose] multus-daemon started\\\\n2026-01-26T14:47:09Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:47:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.311618 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:04Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:04 crc 
kubenswrapper[4823]: I0126 14:48:04.322155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.322608 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.322733 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.322870 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.322997 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.425530 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.425567 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.425576 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.425591 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.425600 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.527794 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.527919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.527931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.527946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.527955 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.559962 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.559981 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.559973 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:04 crc kubenswrapper[4823]: E0126 14:48:04.560079 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:04 crc kubenswrapper[4823]: E0126 14:48:04.560131 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:04 crc kubenswrapper[4823]: E0126 14:48:04.560186 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.582311 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:37:55.093196665 +0000 UTC Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.630486 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.630787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.630951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.631083 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.631208 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.734304 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.734385 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.734398 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.734421 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.734435 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.836872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.836928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.836939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.836955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.836968 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.939186 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.939223 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.939235 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.939249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:04 crc kubenswrapper[4823]: I0126 14:48:04.939260 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:04Z","lastTransitionTime":"2026-01-26T14:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.041538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.041906 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.041983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.042048 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.042118 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.143902 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.143946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.143957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.143974 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.143988 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.246437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.246475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.246485 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.246504 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.246513 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.348621 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.348654 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.348663 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.348675 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.348684 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.451414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.451445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.451454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.451466 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.451474 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.554525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.554580 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.554593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.554611 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.554625 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.560059 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:05 crc kubenswrapper[4823]: E0126 14:48:05.560315 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.583564 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:38:02.69966493 +0000 UTC Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.659042 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.659433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.659524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.659592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.659679 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.762925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.762976 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.762991 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.763013 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.763028 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.865760 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.865801 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.865812 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.865869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.865880 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.969299 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.969345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.969356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.969392 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.969403 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:05Z","lastTransitionTime":"2026-01-26T14:48:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.994314 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/3.log" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.995276 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/2.log" Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.998955 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" exitCode=1 Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.998999 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} Jan 26 14:48:05 crc kubenswrapper[4823]: I0126 14:48:05.999051 4823 scope.go:117] "RemoveContainer" containerID="e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.000827 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.001183 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.018133 4823 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45312302-ac2e-4a06-8fcf-dd4f6e0baa37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13aa7fa5aa898d3825c5adb254ec7ce99a4f0623492d4c460a00d10323e85756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.029532 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.041385 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.052901 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.070175 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.072891 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.072937 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.072946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.072960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.072969 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.081564 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.091191 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.111290 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f
36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding 
new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:48:05Z\\\",\\\"message\\\":\\\"lector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 14:48:04.711656 6932 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 14:48:04.712135 6932 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.712182 6932 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.712297 6932 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.712324 6932 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.713112 6932 factory.go:656] Stopping watch factory\\\\nI0126 14:48:04.776941 6932 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 14:48:04.776995 6932 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 14:48:04.777067 6932 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:48:04.777105 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:48:04.777248 6932 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:48:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.121933 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f
225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.135175 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8475
2537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.146597 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.158241 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"
name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc
478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-
01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.175321 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.175356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.175407 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.175423 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.175432 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.193620 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.204805 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.214139 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.229792 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.243616 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.259832 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9997e1c384
fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"2026-01-26T14:47:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab\\\\n2026-01-26T14:47:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab to /host/opt/cni/bin/\\\\n2026-01-26T14:47:09Z [verbose] multus-daemon started\\\\n2026-01-26T14:47:09Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:47:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.271344 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:06Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:06 crc 
kubenswrapper[4823]: I0126 14:48:06.278075 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.278117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.278129 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.278147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.278160 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.373114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.373296 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:10.373272608 +0000 UTC m=+147.058735713 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.373334 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.373467 4823 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.373517 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:49:10.373505114 +0000 UTC m=+147.058968219 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.380741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.380780 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.380792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.380809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.380821 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.473952 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.474000 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.474040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474154 4823 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474242 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-26 14:49:10.474220492 +0000 UTC m=+147.159683647 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474168 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474346 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474387 4823 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474461 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-26 14:49:10.474437528 +0000 UTC m=+147.159900683 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474494 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474542 4823 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474555 4823 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.474596 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-26 14:49:10.474585811 +0000 UTC m=+147.160048906 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.483283 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.483314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.483323 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.483337 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.483346 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.560043 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.560134 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.560191 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.560211 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.560291 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:06 crc kubenswrapper[4823]: E0126 14:48:06.560480 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.583881 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:08:39.894839348 +0000 UTC Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.586035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.586070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.586079 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.586092 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.586100 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.689548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.689615 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.689634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.689657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.689677 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.794319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.794414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.794437 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.794469 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.794489 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.896771 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.896843 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.896873 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.896919 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.896951 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.999459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.999509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.999522 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.999538 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:06 crc kubenswrapper[4823]: I0126 14:48:06.999553 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:06Z","lastTransitionTime":"2026-01-26T14:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.101703 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.101784 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.101809 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.101840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.101868 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.205806 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.205836 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.205852 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.205907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.205921 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.308823 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.308874 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.308887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.308905 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.308919 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.411159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.411204 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.411214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.411230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.411238 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.513090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.513147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.513163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.513183 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.513196 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.559559 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:07 crc kubenswrapper[4823]: E0126 14:48:07.559804 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.584484 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:37:58.646299828 +0000 UTC Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.615745 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.615818 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.615835 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.615862 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.615880 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.718195 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.718233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.718242 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.718256 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.718265 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.821246 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.821294 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.821303 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.821319 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.821337 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.923607 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.923642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.923652 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.923667 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:07 crc kubenswrapper[4823]: I0126 14:48:07.923678 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:07Z","lastTransitionTime":"2026-01-26T14:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.025705 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.025753 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.025769 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.025791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.025802 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.129134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.129176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.129185 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.129201 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.129213 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.231643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.231679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.231690 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.231708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.231720 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.334399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.334442 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.334450 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.334467 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.334476 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.438196 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.438253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.438266 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.438288 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.438302 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.541292 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.541342 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.541353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.541386 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.541396 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.559574 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.559614 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.559690 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.559814 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.560136 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.560213 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.585018 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 19:36:51.863685561 +0000 UTC Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.643491 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.643536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.643547 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.643564 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.643577 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.745813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.745858 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.745869 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.745882 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.745893 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.848218 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.848249 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.848257 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.848271 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.848279 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.895678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.896598 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.896661 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.896698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.896728 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.910955 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.915595 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.915641 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.915650 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.915672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.915683 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.926742 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.930447 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.930506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.930518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.930537 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.930548 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.943735 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.947588 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.947673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.947698 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.947728 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.947747 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.959824 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.964262 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.964295 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.964304 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.964320 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.964330 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.976394 4823 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-26T14:48:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ceea7b9-d10c-45de-8939-0873f2d979e6\\\",\\\"systemUUID\\\":\\\"06121041-f4b9-4887-a160-aaea37857ce6\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:08Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:08 crc kubenswrapper[4823]: E0126 14:48:08.976514 4823 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.978070 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.978101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.978110 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.978124 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:08 crc kubenswrapper[4823]: I0126 14:48:08.978133 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:08Z","lastTransitionTime":"2026-01-26T14:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.012008 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/3.log" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.081095 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.081134 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.081147 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.081167 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.081179 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.183721 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.183800 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.183811 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.183825 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.183834 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.287813 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.287857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.287872 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.287889 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.287898 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.390408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.390445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.390458 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.390474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.390486 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.493418 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.493471 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.493486 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.493507 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.493521 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.559528 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:09 crc kubenswrapper[4823]: E0126 14:48:09.559719 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.585296 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 23:39:04.733261164 +0000 UTC Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.597101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.597138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.597146 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.597163 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.597173 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.700807 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.700867 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.700877 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.700909 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.700923 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.804657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.804701 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.804712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.804730 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.804742 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.907035 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.907925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.907957 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.907975 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:09 crc kubenswrapper[4823]: I0126 14:48:09.907986 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:09Z","lastTransitionTime":"2026-01-26T14:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.015126 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.015510 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.015529 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.015963 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.016054 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.119895 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.119954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.119964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.119981 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.119992 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.223431 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.223478 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.223496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.223512 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.223522 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.326433 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.326476 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.326489 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.326518 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.326534 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.430860 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.430929 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.430940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.430960 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.430974 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.534214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.534270 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.534286 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.534307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.534318 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.560026 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.560261 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.560570 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:10 crc kubenswrapper[4823]: E0126 14:48:10.560756 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:10 crc kubenswrapper[4823]: E0126 14:48:10.560960 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:10 crc kubenswrapper[4823]: E0126 14:48:10.561115 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.586471 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 13:54:01.947744016 +0000 UTC Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.636617 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.636692 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.636704 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.636724 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.636736 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.740276 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.740318 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.740327 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.740345 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.740389 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.843519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.843581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.843592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.843610 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.843625 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.945892 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.945931 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.945940 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.945954 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:10 crc kubenswrapper[4823]: I0126 14:48:10.945963 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:10Z","lastTransitionTime":"2026-01-26T14:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.048536 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.048569 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.048577 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.048592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.048600 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.151037 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.151090 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.151101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.151117 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.151130 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.253505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.253555 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.253565 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.253582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.253593 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.355927 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.355951 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.355959 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.355973 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.355984 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.457938 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.457980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.457988 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.458003 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.458012 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.559521 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:11 crc kubenswrapper[4823]: E0126 14:48:11.559664 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.561064 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.561119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.561128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.561140 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.561150 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.587273 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:34:22.901155468 +0000 UTC Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.663719 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.663775 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.663788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.663808 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.663825 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.766154 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.766207 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.766227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.766253 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.766271 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.869857 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.869939 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.869962 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.869999 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.870023 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.974184 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.974230 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.974240 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.974258 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:11 crc kubenswrapper[4823]: I0126 14:48:11.974271 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:11Z","lastTransitionTime":"2026-01-26T14:48:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.077275 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.077445 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.077534 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.077634 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.077741 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.180505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.180548 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.180560 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.180576 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.180587 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.283077 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.283583 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.283592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.283609 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.283618 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.386022 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.386074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.386085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.386104 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.386122 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.489030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.489065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.489074 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.489091 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.489100 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.560086 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:12 crc kubenswrapper[4823]: E0126 14:48:12.560202 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.560086 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:12 crc kubenswrapper[4823]: E0126 14:48:12.560318 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.560104 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:12 crc kubenswrapper[4823]: E0126 14:48:12.560484 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.587625 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 13:51:06.027700057 +0000 UTC Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.591949 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.591983 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.591996 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.592012 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.592024 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.694232 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.694281 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.694291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.694306 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.694317 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.797552 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.797620 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.797641 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.797668 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.797683 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.901205 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.901277 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.901291 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.901313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:12 crc kubenswrapper[4823]: I0126 14:48:12.901329 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:12Z","lastTransitionTime":"2026-01-26T14:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.004787 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.004827 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.004840 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.004858 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.004870 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.107395 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.107459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.107480 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.107509 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.107529 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.210179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.210217 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.210227 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.210244 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.210255 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.313086 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.313130 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.313139 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.313153 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.313163 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.415584 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.415629 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.415642 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.415658 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.415670 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.518313 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.518388 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.518402 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.518419 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.518432 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.560302 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:13 crc kubenswrapper[4823]: E0126 14:48:13.560560 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.572636 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p555f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e7853ce-0557-452f-b7ae-cc549bf8e2ae\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:54Z\\\",\\\"message\\\":\\\"2026-01-26T14:47:09+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab\\\\n2026-01-26T14:47:09+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_84467746-e7b4-4e0a-abb9-7c8d60d95dab to /host/opt/cni/bin/\\\\n2026-01-26T14:47:09Z [verbose] multus-daemon started\\\\n2026-01-26T14:47:09Z [verbose] Readiness Indicator file check\\\\n2026-01-26T14:47:54Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9z4t6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p555f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.582005 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"35318be8-9029-4606-8a04-feec32098d9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:21Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wzsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:21Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-dh4f9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc 
kubenswrapper[4823]: I0126 14:48:13.587864 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 18:56:49.154725859 +0000 UTC Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.592210 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"426c48d3-6d8c-4612-a6ed-ac8a62472eb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8736f6a4c96dd9f16be4f6535d48c2c257b7d1b523b879534f521ff8336d2c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://afc85e776a774e7e06bb4691674c9dbbc413a5ff5fd7feb814010bbcd2dc82ae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d3aed231a967c0239bc2eb244edd8957fb584e4ab96350df0b989e37c7d4e5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manag
er-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.602526 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8299cb2-bfb5-40bf-bc6f-567d0fc927e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c752c29188accd5e4152c1c3960ab7b9ca76ad3636d24fd4fdca356e6c0d4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e79cc4ad8ead9c66236318765f5821d5ee24b59dab1a756ef436e85ad48cae99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33b8287c9ef9bff38b70708b6eda84178a20a6aee826e525cdfc3801b2f6989e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://58702b7a1a75327927d30d146cf68583d7d966f2faf0e2c9051e671d30014d00\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.615019 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c6b8f46bf8847c0d9b5fcb59c0dcbcf97e2b7089abe1a94d3c5b698646db8c2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.620708 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.620749 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.620759 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.620774 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.620785 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.625635 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffcc35e7e1cba32adc6eaf8b349811377be5dc8f8b05b6a6a45ec5b211f0f2ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1568eb8e58febf920deb40d703e5744de9b59b9715574b228e8491deda3338fd\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.638193 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"45312302-ac2e-4a06-8fcf-dd4f6e0baa37\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13aa7fa5aa898d3825c5adb254ec7ce99a4f0623492d4c460a00d10323e85756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://81186c6b1f3e7c7ff1176b15202f37dde0bc7de0a7c98f81b86deaf45e209823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.653195 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.665045 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-bfxnx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ec2a580e-bcb0-478f-9230-c8d40b4748d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://220009dd9edf865533e5fc1f4dd16429290caad8245e94cb5b7b79f57de9c19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w8dz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bfxnx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.685701 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"232a66a2-55bb-44f6-81a0-383432fbf1d5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3a18000c9a51142a23271e597d9e97fcfcf468fd7f9cbe8f28d52af712b22b1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:47:34Z\\\",\\\"message\\\":\\\"emon-dh4f9\\\\nI0126 14:47:33.962180 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/network-metrics-daemon-dh4f9\\\\nI0126 14:47:33.962237 6507 ovn.go:134] Ensuring zone local for Pod openshift-multus/network-metrics-daemon-dh4f9 in node crc\\\\nI0126 14:47:33.962259 6507 obj_retry.go:303] Retry object setup: *v1.Pod 
openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962275 6507 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962282 6507 ovn.go:134] Ensuring zone local for Pod openshift-image-registry/node-ca-d69wh in node crc\\\\nI0126 14:47:33.962287 6507 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-d69wh after 0 failed attempt(s)\\\\nI0126 14:47:33.962292 6507 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-d69wh\\\\nI0126 14:47:33.962312 6507 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0126 14:47:33.962349 6507 base_network_controller_pods.go:477] [default/openshift-multus/network-metrics-daemon-dh4f9] creating logical port openshift-multus_network-metrics-daemon-dh4f9 for pod on switch crc\\\\nF0126 14:47:33.962417 6507 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-26T14:48:05Z\\\",\\\"message\\\":\\\"lector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 14:48:04.711656 6932 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0126 14:48:04.712135 6932 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.712182 6932 reflector.go:311] Stopping reflector *v1.Pod 
(0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.712297 6932 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.712324 6932 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0126 14:48:04.713112 6932 factory.go:656] Stopping watch factory\\\\nI0126 14:48:04.776941 6932 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0126 14:48:04.776995 6932 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0126 14:48:04.777067 6932 ovnkube.go:599] Stopped ovnkube\\\\nI0126 14:48:04.777105 6932 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0126 14:48:04.777248 6932 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-26T14:48:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"m
ountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nf2sr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-kpz7g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.706661 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b73cc6c2-b51c-4611-a6ba-6df548e10912\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e35ec1d885ffc5cd3f9edea76d0f3e812e82034979a451542032a3e5e4f4679f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d7a896b6ad841eb25630e3a412cd7d72736707b3664f3da81444ed8400365b\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://de9aca2cde911cb2ea1f3b73424b8a6d46435b1f8c8cf5d7429afe9383f50bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d8fdcfe4b0bc8ca4d8834c2b12bf9106d44fa613aed9a081f1439ec2b5e4a94f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ec23886118edc70ad516bbd7cc2068381197b33e7af94fa28376aaf6fb77733\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://581d1f7b67bbbcc993cb9470cd6d64e23d80b906be3cf4c353635ebfa20654bb\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e5a9dcbb20e778e5b72439034cb4439bd6dc758fdeabb1e9299b271eb2f8718\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cd253ce22e0d06c37a4d318c2abda3c96ffb92db87837280c6833e213aabff79\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.721458 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://599b1867e9cd63cd38385d8f10e7ffdd57305237097369f97e64577b731753f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.723101 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.723142 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.723155 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.723176 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.723189 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.735417 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.747096 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-d69wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a32f9039-ae4f-4825-b1d4-3a1349d56d7f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec42a2f763aceb9eb79369035273bf504b3e567c8f89fa423b278e97e9101aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qd95n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:07Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-d69wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.758760 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ffa69094ca0faaf2fe26294a334eb9ed6c80fca6d272a7b51012f2d899073bd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7d5zm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kv6z2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.772983 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1364c14e-d1f9-422e-bbee-efd99f5f2271\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c1a95e23f17b4c2e9c2a961ceab8d33475610f36b78ed2e376ef81eee1b2b121\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:16Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fea3268dd1f5da7c381f52e820bbc2e50408ab094be0521e9af27aa2a1a532f6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a
1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9a10d8176e92f55520e5030a4f71b426edc780adee15b54a52d449a2a9c46973\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://69df8e731503898b7ad4e73d660ae364ca4d51d898a806844b7419df5186c010\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\
\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d84752537c0df69bc36c8ca751df68edfba6d757c100e9feb644e2734562b6b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98
100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2009d97175b3d4d8c05fc447c68d0e09f68422e0a48e3589cab602770d02bf5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a661766c0ce6ec8c76668c612a8d6c8a82e8c41985032afac94791c907f3482e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:47:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:47:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2488b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:08Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zlr4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.783692 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1f0bc2d5-070a-415d-b477-914c63ad7b57\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9cbdad937d70addccea985edc892ab8eb7972955d3549b094fc6c5f78abfde8\\\",\\\"image\
\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://513a9486dfe6f615dff9dcf1dee3b446a24829b97eefc90c886466b78d90f0b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:47:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cscwj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"
192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:47:20Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-z5x46\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.796986 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef64e7a1-3b41-43fe-90ef-603abc3e6b63\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-26T14:46:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stat
ic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-26T14:46:46Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-26T14:46:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-26T14:46:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-26T14:46:43Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.812421 4823 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-26T14:47:02Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-26T14:48:13Z is after 2025-08-24T17:21:41Z" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.825659 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.825712 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.825722 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.825740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.825751 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.930876 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.930964 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.930980 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.931010 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:13 crc kubenswrapper[4823]: I0126 14:48:13.931031 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:13Z","lastTransitionTime":"2026-01-26T14:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.032965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.033019 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.033030 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.033045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.033055 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.135350 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.135428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.135440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.135454 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.135466 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.237656 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.237710 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.237721 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.237736 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.237747 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.340063 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.340128 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.340138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.340157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.340167 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.443151 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.443202 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.443214 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.443233 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.443245 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.545653 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.545706 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.545717 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.545735 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.545747 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.559960 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.560021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:14 crc kubenswrapper[4823]: E0126 14:48:14.560109 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.560144 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:14 crc kubenswrapper[4823]: E0126 14:48:14.560263 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:14 crc kubenswrapper[4823]: E0126 14:48:14.560341 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.588623 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 09:40:12.238260119 +0000 UTC Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.648641 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.648968 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.649044 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.649119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.649185 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.751740 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.751791 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.751803 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.751820 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.751832 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.853944 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.854071 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.854085 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.854099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.854109 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.956942 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.956979 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.956987 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.957001 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:14 crc kubenswrapper[4823]: I0126 14:48:14.957010 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:14Z","lastTransitionTime":"2026-01-26T14:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.058727 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.058766 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.058777 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.058792 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.058803 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.161355 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.161680 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.161755 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.161837 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.161910 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.263946 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.264002 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.264011 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.264024 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.264033 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.366630 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.366657 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.366666 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.366678 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.366688 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.469029 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.469087 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.469099 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.469115 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.469129 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.560255 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:15 crc kubenswrapper[4823]: E0126 14:48:15.560417 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.571414 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.571492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.571505 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.571519 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.571529 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.590221 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 14:45:01.762113864 +0000 UTC Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.673376 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.673408 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.673417 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.673429 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.673439 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.775307 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.775344 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.775353 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.775381 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.775439 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.878171 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.878222 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.878231 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.878245 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.878254 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.980864 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.980904 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.980915 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.980933 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:15 crc kubenswrapper[4823]: I0126 14:48:15.980944 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:15Z","lastTransitionTime":"2026-01-26T14:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.083047 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.083125 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.083138 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.083159 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.083172 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.186490 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.186593 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.186625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.186655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.186676 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.290251 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.290314 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.290328 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.290347 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.290372 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.392887 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.392925 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.392934 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.392956 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.392969 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.494492 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.494524 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.494532 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.494545 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.494554 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.559294 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.559321 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:16 crc kubenswrapper[4823]: E0126 14:48:16.559453 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.559458 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:16 crc kubenswrapper[4823]: E0126 14:48:16.559554 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:16 crc kubenswrapper[4823]: E0126 14:48:16.560628 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.590519 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:48:15.612057502 +0000 UTC Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.597389 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.597440 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.597455 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.597474 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.597486 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.700393 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.700428 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.700439 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.700457 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.700469 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.803969 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.804033 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.804045 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.804065 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.804079 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.907896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.907945 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.907955 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.907972 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:16 crc kubenswrapper[4823]: I0126 14:48:16.907984 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:16Z","lastTransitionTime":"2026-01-26T14:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.010742 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.010788 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.010799 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.010816 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.010827 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.114356 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.114443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.114459 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.114484 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.114500 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.217677 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.217732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.217746 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.217767 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.217781 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.322910 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.323487 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.323687 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.323849 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.323995 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.427343 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.427456 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.427475 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.427515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.427536 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.530582 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.530673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.530694 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.530732 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.530758 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.559975 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:17 crc kubenswrapper[4823]: E0126 14:48:17.560275 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.591171 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 12:12:00.235222443 +0000 UTC Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.634399 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.634476 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.634496 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.634525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.634546 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.737896 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.737953 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.737965 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.737984 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.737999 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.842679 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.842741 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.842761 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.842821 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.842845 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.946443 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.946506 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.946525 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.946553 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:17 crc kubenswrapper[4823]: I0126 14:48:17.946572 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:17Z","lastTransitionTime":"2026-01-26T14:48:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.655585 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:18 crc kubenswrapper[4823]: E0126 14:48:18.655746 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.655944 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.655985 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:18 crc kubenswrapper[4823]: E0126 14:48:18.656035 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:18 crc kubenswrapper[4823]: E0126 14:48:18.656178 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.657554 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.657573 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.657581 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.657592 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.657609 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:18Z","lastTransitionTime":"2026-01-26T14:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.657929 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:34:54.537602868 +0000 UTC Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.759625 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.759651 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.759659 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.759673 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.759681 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:18Z","lastTransitionTime":"2026-01-26T14:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.862119 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.862165 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.862179 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.862200 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.862212 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:18Z","lastTransitionTime":"2026-01-26T14:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.964603 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.964643 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.964655 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.964672 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:18 crc kubenswrapper[4823]: I0126 14:48:18.964683 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:18Z","lastTransitionTime":"2026-01-26T14:48:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.067841 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.067894 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.067907 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.067928 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.067942 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:19Z","lastTransitionTime":"2026-01-26T14:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.087057 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.087123 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.087137 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.087157 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.087172 4823 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-26T14:48:19Z","lastTransitionTime":"2026-01-26T14:48:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.141303 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm"] Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.141822 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.143596 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.143642 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.144697 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.145883 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.196731 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-d69wh" podStartSLOduration=72.196700301 podStartE2EDuration="1m12.196700301s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.183041009 +0000 UTC m=+95.868504114" watchObservedRunningTime="2026-01-26 14:48:19.196700301 +0000 UTC m=+95.882163406" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.216182 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podStartSLOduration=72.216150753 podStartE2EDuration="1m12.216150753s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.197623294 +0000 UTC m=+95.883086399" watchObservedRunningTime="2026-01-26 14:48:19.216150753 +0000 UTC 
m=+95.901613858" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.216400 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zlr4w" podStartSLOduration=72.216396419 podStartE2EDuration="1m12.216396419s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.21526577 +0000 UTC m=+95.900728895" watchObservedRunningTime="2026-01-26 14:48:19.216396419 +0000 UTC m=+95.901859524" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.247593 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-z5x46" podStartSLOduration=71.247569833 podStartE2EDuration="1m11.247569833s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.234138966 +0000 UTC m=+95.919602071" watchObservedRunningTime="2026-01-26 14:48:19.247569833 +0000 UTC m=+95.933032938" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.264153 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=77.26413744 podStartE2EDuration="1m17.26413744s" podCreationTimestamp="2026-01-26 14:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.262296852 +0000 UTC m=+95.947759967" watchObservedRunningTime="2026-01-26 14:48:19.26413744 +0000 UTC m=+95.949600545" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.264390 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/15346600-2853-4be2-b349-9442afe45bdc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.264762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15346600-2853-4be2-b349-9442afe45bdc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.264861 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15346600-2853-4be2-b349-9442afe45bdc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.264941 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15346600-2853-4be2-b349-9442afe45bdc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.265044 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15346600-2853-4be2-b349-9442afe45bdc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.292772 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-p555f" podStartSLOduration=72.292749508 podStartE2EDuration="1m12.292749508s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.291427475 +0000 UTC m=+95.976890600" watchObservedRunningTime="2026-01-26 14:48:19.292749508 +0000 UTC m=+95.978212613" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.322125 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.322105836 podStartE2EDuration="1m16.322105836s" podCreationTimestamp="2026-01-26 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.322013104 +0000 UTC m=+96.007476219" watchObservedRunningTime="2026-01-26 14:48:19.322105836 +0000 UTC m=+96.007568941" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.336393 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=44.336340633 podStartE2EDuration="44.336340633s" podCreationTimestamp="2026-01-26 14:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.33542692 +0000 UTC m=+96.020890045" watchObservedRunningTime="2026-01-26 14:48:19.336340633 +0000 UTC m=+96.021803738" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.365814 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/15346600-2853-4be2-b349-9442afe45bdc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.366491 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15346600-2853-4be2-b349-9442afe45bdc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.366615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15346600-2853-4be2-b349-9442afe45bdc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.366727 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15346600-2853-4be2-b349-9442afe45bdc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.366818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/15346600-2853-4be2-b349-9442afe45bdc-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: 
I0126 14:48:19.366839 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15346600-2853-4be2-b349-9442afe45bdc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.366999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/15346600-2853-4be2-b349-9442afe45bdc-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.368070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/15346600-2853-4be2-b349-9442afe45bdc-service-ca\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.373650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15346600-2853-4be2-b349-9442afe45bdc-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.399723 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=19.399692487 podStartE2EDuration="19.399692487s" podCreationTimestamp="2026-01-26 14:48:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.398814674 +0000 UTC m=+96.084277779" watchObservedRunningTime="2026-01-26 14:48:19.399692487 +0000 UTC m=+96.085155592" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.409070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/15346600-2853-4be2-b349-9442afe45bdc-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-rf2dm\" (UID: \"15346600-2853-4be2-b349-9442afe45bdc\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.446591 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bfxnx" podStartSLOduration=72.446566346 podStartE2EDuration="1m12.446566346s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.446027432 +0000 UTC m=+96.131490547" watchObservedRunningTime="2026-01-26 14:48:19.446566346 +0000 UTC m=+96.132029451" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.471109 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.537854 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=72.53782599 podStartE2EDuration="1m12.53782599s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.525532232 +0000 UTC m=+96.210995337" watchObservedRunningTime="2026-01-26 14:48:19.53782599 +0000 UTC m=+96.223289095" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.559527 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:19 crc kubenswrapper[4823]: E0126 14:48:19.559679 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.658842 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 09:51:21.07056587 +0000 UTC Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.658927 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.666750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" event={"ID":"15346600-2853-4be2-b349-9442afe45bdc","Type":"ContainerStarted","Data":"14345798aabf9d78b731dc0efb51fee83e1d1df44dde00b0737b4200a01a79ef"} Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.667642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" event={"ID":"15346600-2853-4be2-b349-9442afe45bdc","Type":"ContainerStarted","Data":"0eaacf0ac7d0e0eb494cae9556bbd05125676a1c705ccfadeab5c79077a71979"} Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.673102 4823 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 26 14:48:19 crc kubenswrapper[4823]: I0126 14:48:19.685295 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-rf2dm" podStartSLOduration=72.685262212 podStartE2EDuration="1m12.685262212s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:19.6851896 +0000 UTC m=+96.370652705" watchObservedRunningTime="2026-01-26 14:48:19.685262212 +0000 UTC m=+96.370725317" Jan 26 14:48:20 crc 
kubenswrapper[4823]: I0126 14:48:20.560265 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:20 crc kubenswrapper[4823]: I0126 14:48:20.560265 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:20 crc kubenswrapper[4823]: I0126 14:48:20.560276 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:20 crc kubenswrapper[4823]: E0126 14:48:20.560424 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:20 crc kubenswrapper[4823]: E0126 14:48:20.560491 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:20 crc kubenswrapper[4823]: E0126 14:48:20.560556 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:21 crc kubenswrapper[4823]: I0126 14:48:21.559752 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:21 crc kubenswrapper[4823]: E0126 14:48:21.560186 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:21 crc kubenswrapper[4823]: I0126 14:48:21.560559 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:48:21 crc kubenswrapper[4823]: E0126 14:48:21.560780 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:48:22 crc kubenswrapper[4823]: I0126 14:48:22.559339 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:22 crc kubenswrapper[4823]: I0126 14:48:22.559436 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:22 crc kubenswrapper[4823]: I0126 14:48:22.559443 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:22 crc kubenswrapper[4823]: E0126 14:48:22.559545 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:22 crc kubenswrapper[4823]: E0126 14:48:22.559652 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:22 crc kubenswrapper[4823]: E0126 14:48:22.559728 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:23 crc kubenswrapper[4823]: I0126 14:48:23.559950 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:23 crc kubenswrapper[4823]: E0126 14:48:23.560870 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:24 crc kubenswrapper[4823]: I0126 14:48:24.559419 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:24 crc kubenswrapper[4823]: I0126 14:48:24.559435 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:24 crc kubenswrapper[4823]: E0126 14:48:24.559837 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:24 crc kubenswrapper[4823]: I0126 14:48:24.559435 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:24 crc kubenswrapper[4823]: E0126 14:48:24.560081 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:24 crc kubenswrapper[4823]: E0126 14:48:24.560256 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:25 crc kubenswrapper[4823]: I0126 14:48:25.560220 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:25 crc kubenswrapper[4823]: E0126 14:48:25.560400 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:25 crc kubenswrapper[4823]: I0126 14:48:25.838292 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:25 crc kubenswrapper[4823]: E0126 14:48:25.838432 4823 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:48:25 crc kubenswrapper[4823]: E0126 14:48:25.838481 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs podName:35318be8-9029-4606-8a04-feec32098d9c nodeName:}" failed. No retries permitted until 2026-01-26 14:49:29.838466098 +0000 UTC m=+166.523929203 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs") pod "network-metrics-daemon-dh4f9" (UID: "35318be8-9029-4606-8a04-feec32098d9c") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 26 14:48:26 crc kubenswrapper[4823]: I0126 14:48:26.559419 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:26 crc kubenswrapper[4823]: E0126 14:48:26.559612 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:26 crc kubenswrapper[4823]: I0126 14:48:26.560628 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:26 crc kubenswrapper[4823]: E0126 14:48:26.560873 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:26 crc kubenswrapper[4823]: I0126 14:48:26.560925 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:26 crc kubenswrapper[4823]: E0126 14:48:26.561073 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:27 crc kubenswrapper[4823]: I0126 14:48:27.559814 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:27 crc kubenswrapper[4823]: E0126 14:48:27.559978 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:28 crc kubenswrapper[4823]: I0126 14:48:28.559498 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:28 crc kubenswrapper[4823]: I0126 14:48:28.559506 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:28 crc kubenswrapper[4823]: I0126 14:48:28.559528 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:28 crc kubenswrapper[4823]: E0126 14:48:28.559888 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:28 crc kubenswrapper[4823]: E0126 14:48:28.559990 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:28 crc kubenswrapper[4823]: E0126 14:48:28.559708 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:29 crc kubenswrapper[4823]: I0126 14:48:29.560097 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:29 crc kubenswrapper[4823]: E0126 14:48:29.560841 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:30 crc kubenswrapper[4823]: I0126 14:48:30.559466 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:30 crc kubenswrapper[4823]: I0126 14:48:30.559520 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:30 crc kubenswrapper[4823]: I0126 14:48:30.559642 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:30 crc kubenswrapper[4823]: E0126 14:48:30.559727 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:30 crc kubenswrapper[4823]: E0126 14:48:30.559889 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:30 crc kubenswrapper[4823]: E0126 14:48:30.560118 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:31 crc kubenswrapper[4823]: I0126 14:48:31.559461 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:31 crc kubenswrapper[4823]: E0126 14:48:31.559937 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:32 crc kubenswrapper[4823]: I0126 14:48:32.559820 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:32 crc kubenswrapper[4823]: I0126 14:48:32.559860 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:32 crc kubenswrapper[4823]: I0126 14:48:32.559874 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:32 crc kubenswrapper[4823]: E0126 14:48:32.560533 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:32 crc kubenswrapper[4823]: E0126 14:48:32.560653 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:32 crc kubenswrapper[4823]: E0126 14:48:32.560730 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:33 crc kubenswrapper[4823]: I0126 14:48:33.559336 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:33 crc kubenswrapper[4823]: E0126 14:48:33.561094 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:34 crc kubenswrapper[4823]: I0126 14:48:34.559602 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:34 crc kubenswrapper[4823]: I0126 14:48:34.559687 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:34 crc kubenswrapper[4823]: I0126 14:48:34.559739 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:34 crc kubenswrapper[4823]: E0126 14:48:34.560217 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:34 crc kubenswrapper[4823]: E0126 14:48:34.560355 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:34 crc kubenswrapper[4823]: I0126 14:48:34.560567 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:48:34 crc kubenswrapper[4823]: E0126 14:48:34.560679 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:34 crc kubenswrapper[4823]: E0126 14:48:34.560734 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-kpz7g_openshift-ovn-kubernetes(232a66a2-55bb-44f6-81a0-383432fbf1d5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" Jan 26 14:48:35 crc kubenswrapper[4823]: I0126 14:48:35.560579 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:35 crc kubenswrapper[4823]: E0126 14:48:35.560870 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:36 crc kubenswrapper[4823]: I0126 14:48:36.559993 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:36 crc kubenswrapper[4823]: I0126 14:48:36.560076 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:36 crc kubenswrapper[4823]: E0126 14:48:36.560120 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:36 crc kubenswrapper[4823]: E0126 14:48:36.560205 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:36 crc kubenswrapper[4823]: I0126 14:48:36.559993 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:36 crc kubenswrapper[4823]: E0126 14:48:36.560283 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:37 crc kubenswrapper[4823]: I0126 14:48:37.559706 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:37 crc kubenswrapper[4823]: E0126 14:48:37.559973 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:38 crc kubenswrapper[4823]: I0126 14:48:38.559763 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:38 crc kubenswrapper[4823]: I0126 14:48:38.559989 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:38 crc kubenswrapper[4823]: I0126 14:48:38.560095 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:38 crc kubenswrapper[4823]: E0126 14:48:38.560347 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:38 crc kubenswrapper[4823]: E0126 14:48:38.560452 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:38 crc kubenswrapper[4823]: E0126 14:48:38.560537 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:39 crc kubenswrapper[4823]: I0126 14:48:39.559691 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:39 crc kubenswrapper[4823]: E0126 14:48:39.559940 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:40 crc kubenswrapper[4823]: I0126 14:48:40.560227 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:40 crc kubenswrapper[4823]: I0126 14:48:40.560312 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:40 crc kubenswrapper[4823]: I0126 14:48:40.560228 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:40 crc kubenswrapper[4823]: E0126 14:48:40.560482 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:40 crc kubenswrapper[4823]: E0126 14:48:40.560551 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:40 crc kubenswrapper[4823]: E0126 14:48:40.560592 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:41 crc kubenswrapper[4823]: I0126 14:48:41.559503 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:41 crc kubenswrapper[4823]: E0126 14:48:41.560037 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.559804 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.559824 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.559829 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:42 crc kubenswrapper[4823]: E0126 14:48:42.560136 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:42 crc kubenswrapper[4823]: E0126 14:48:42.560268 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:42 crc kubenswrapper[4823]: E0126 14:48:42.560427 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.749327 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/1.log" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.749867 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/0.log" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.749913 4823 generic.go:334] "Generic (PLEG): container finished" podID="6e7853ce-0557-452f-b7ae-cc549bf8e2ae" containerID="3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060" exitCode=1 Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.749945 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerDied","Data":"3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060"} Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.749986 4823 scope.go:117] "RemoveContainer" containerID="f61b2a88f596f25734204e5f3774d5fa481608bb97239c785063afc042380aba" Jan 26 14:48:42 crc kubenswrapper[4823]: I0126 14:48:42.750516 4823 scope.go:117] "RemoveContainer" containerID="3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060" Jan 26 14:48:42 crc 
kubenswrapper[4823]: E0126 14:48:42.750692 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-p555f_openshift-multus(6e7853ce-0557-452f-b7ae-cc549bf8e2ae)\"" pod="openshift-multus/multus-p555f" podUID="6e7853ce-0557-452f-b7ae-cc549bf8e2ae" Jan 26 14:48:43 crc kubenswrapper[4823]: I0126 14:48:43.560153 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:43 crc kubenswrapper[4823]: E0126 14:48:43.561285 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:43 crc kubenswrapper[4823]: E0126 14:48:43.599555 4823 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 26 14:48:43 crc kubenswrapper[4823]: I0126 14:48:43.755964 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/1.log" Jan 26 14:48:43 crc kubenswrapper[4823]: E0126 14:48:43.848182 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 14:48:44 crc kubenswrapper[4823]: I0126 14:48:44.560318 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:44 crc kubenswrapper[4823]: E0126 14:48:44.560480 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:44 crc kubenswrapper[4823]: I0126 14:48:44.560603 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:44 crc kubenswrapper[4823]: I0126 14:48:44.560629 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:44 crc kubenswrapper[4823]: E0126 14:48:44.560738 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:44 crc kubenswrapper[4823]: E0126 14:48:44.560820 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:45 crc kubenswrapper[4823]: I0126 14:48:45.559948 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:45 crc kubenswrapper[4823]: E0126 14:48:45.560192 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:46 crc kubenswrapper[4823]: I0126 14:48:46.560507 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:46 crc kubenswrapper[4823]: E0126 14:48:46.560838 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:46 crc kubenswrapper[4823]: I0126 14:48:46.561600 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:46 crc kubenswrapper[4823]: E0126 14:48:46.561728 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:46 crc kubenswrapper[4823]: I0126 14:48:46.561784 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:46 crc kubenswrapper[4823]: E0126 14:48:46.561869 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:47 crc kubenswrapper[4823]: I0126 14:48:47.560780 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:47 crc kubenswrapper[4823]: E0126 14:48:47.561477 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:47 crc kubenswrapper[4823]: I0126 14:48:47.562114 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.559780 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:48 crc kubenswrapper[4823]: E0126 14:48:48.560232 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.559903 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:48 crc kubenswrapper[4823]: E0126 14:48:48.560344 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.559806 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:48 crc kubenswrapper[4823]: E0126 14:48:48.560698 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.654772 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dh4f9"] Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.654875 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:48 crc kubenswrapper[4823]: E0126 14:48:48.654959 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.775172 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/3.log" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.777517 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerStarted","Data":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.778010 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:48:48 crc kubenswrapper[4823]: I0126 14:48:48.804173 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podStartSLOduration=101.80415528 podStartE2EDuration="1m41.80415528s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:48:48.803781351 +0000 UTC m=+125.489244456" watchObservedRunningTime="2026-01-26 14:48:48.80415528 +0000 UTC m=+125.489618385" Jan 26 14:48:48 crc kubenswrapper[4823]: E0126 14:48:48.849671 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 14:48:50 crc kubenswrapper[4823]: I0126 14:48:50.559216 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:50 crc kubenswrapper[4823]: I0126 14:48:50.559400 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:50 crc kubenswrapper[4823]: E0126 14:48:50.559554 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:50 crc kubenswrapper[4823]: I0126 14:48:50.559724 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:50 crc kubenswrapper[4823]: E0126 14:48:50.559806 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:50 crc kubenswrapper[4823]: E0126 14:48:50.560044 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:50 crc kubenswrapper[4823]: I0126 14:48:50.560259 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:50 crc kubenswrapper[4823]: E0126 14:48:50.560572 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:52 crc kubenswrapper[4823]: I0126 14:48:52.560106 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:52 crc kubenswrapper[4823]: I0126 14:48:52.560158 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:52 crc kubenswrapper[4823]: I0126 14:48:52.560102 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:52 crc kubenswrapper[4823]: I0126 14:48:52.560329 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:52 crc kubenswrapper[4823]: E0126 14:48:52.560261 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:52 crc kubenswrapper[4823]: E0126 14:48:52.560523 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:52 crc kubenswrapper[4823]: E0126 14:48:52.560706 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:52 crc kubenswrapper[4823]: E0126 14:48:52.560799 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:53 crc kubenswrapper[4823]: E0126 14:48:53.850387 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 14:48:54 crc kubenswrapper[4823]: I0126 14:48:54.560070 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:54 crc kubenswrapper[4823]: I0126 14:48:54.560156 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:54 crc kubenswrapper[4823]: I0126 14:48:54.560152 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:54 crc kubenswrapper[4823]: I0126 14:48:54.560183 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:54 crc kubenswrapper[4823]: E0126 14:48:54.560302 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:54 crc kubenswrapper[4823]: E0126 14:48:54.560471 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:54 crc kubenswrapper[4823]: E0126 14:48:54.560572 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:54 crc kubenswrapper[4823]: E0126 14:48:54.560691 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:56 crc kubenswrapper[4823]: I0126 14:48:56.560126 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:56 crc kubenswrapper[4823]: I0126 14:48:56.560203 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:56 crc kubenswrapper[4823]: I0126 14:48:56.560126 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:56 crc kubenswrapper[4823]: E0126 14:48:56.560345 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:56 crc kubenswrapper[4823]: E0126 14:48:56.560477 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:56 crc kubenswrapper[4823]: I0126 14:48:56.560485 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:56 crc kubenswrapper[4823]: E0126 14:48:56.560586 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:56 crc kubenswrapper[4823]: E0126 14:48:56.560730 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:57 crc kubenswrapper[4823]: I0126 14:48:57.560691 4823 scope.go:117] "RemoveContainer" containerID="3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060" Jan 26 14:48:58 crc kubenswrapper[4823]: I0126 14:48:58.559590 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:48:58 crc kubenswrapper[4823]: I0126 14:48:58.559629 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:48:58 crc kubenswrapper[4823]: E0126 14:48:58.559750 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:48:58 crc kubenswrapper[4823]: E0126 14:48:58.559881 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:48:58 crc kubenswrapper[4823]: I0126 14:48:58.560179 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:48:58 crc kubenswrapper[4823]: I0126 14:48:58.560238 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:48:58 crc kubenswrapper[4823]: E0126 14:48:58.560542 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:48:58 crc kubenswrapper[4823]: E0126 14:48:58.560551 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:48:58 crc kubenswrapper[4823]: I0126 14:48:58.817917 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/1.log" Jan 26 14:48:58 crc kubenswrapper[4823]: I0126 14:48:58.817990 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerStarted","Data":"25e57a64a9bcd0d85710f61af7e99512530bf816f608ba70b91b03589278eb4f"} Jan 26 14:48:58 crc kubenswrapper[4823]: E0126 14:48:58.852673 4823 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 26 14:49:00 crc kubenswrapper[4823]: I0126 14:49:00.559463 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:00 crc kubenswrapper[4823]: I0126 14:49:00.559540 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:00 crc kubenswrapper[4823]: I0126 14:49:00.559491 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:49:00 crc kubenswrapper[4823]: I0126 14:49:00.559463 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:49:00 crc kubenswrapper[4823]: E0126 14:49:00.559731 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:49:00 crc kubenswrapper[4823]: E0126 14:49:00.559818 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:49:00 crc kubenswrapper[4823]: E0126 14:49:00.559945 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:49:00 crc kubenswrapper[4823]: E0126 14:49:00.560125 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:49:02 crc kubenswrapper[4823]: I0126 14:49:02.359037 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:49:02 crc kubenswrapper[4823]: I0126 14:49:02.560033 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:02 crc kubenswrapper[4823]: I0126 14:49:02.560031 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:49:02 crc kubenswrapper[4823]: E0126 14:49:02.560764 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 26 14:49:02 crc kubenswrapper[4823]: I0126 14:49:02.560102 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:49:02 crc kubenswrapper[4823]: I0126 14:49:02.560088 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:02 crc kubenswrapper[4823]: E0126 14:49:02.560872 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-dh4f9" podUID="35318be8-9029-4606-8a04-feec32098d9c" Jan 26 14:49:02 crc kubenswrapper[4823]: E0126 14:49:02.561038 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 26 14:49:02 crc kubenswrapper[4823]: E0126 14:49:02.561249 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.559422 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.559496 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.559495 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.559455 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.562442 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.562910 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.562942 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.563165 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.563264 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 14:49:04 crc kubenswrapper[4823]: I0126 14:49:04.566551 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.068515 4823 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.162723 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8kxw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.163272 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.169001 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.169089 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.169316 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.169417 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.169731 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.170042 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.170078 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.170275 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.171007 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.171263 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5hr6d"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.175099 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4"] 
Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.176453 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.184076 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.186681 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.187052 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.187454 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.188064 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.188352 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.188547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.188782 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.206054 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4m2g6"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.206729 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.207110 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v7zhj"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.207675 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.207791 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.208287 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.208385 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.209033 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.210118 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.210506 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.220206 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6vd9x"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.221137 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.223989 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.224124 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.224821 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.226052 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.226320 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.226526 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.226668 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sd92l"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.227447 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.227867 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.227972 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cdl85"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.228034 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.228427 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.228589 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.228830 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.234823 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.235263 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.235490 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.237273 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.237789 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.238614 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5b7zm"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.238997 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z4f2q"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.239413 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.239523 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.239803 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.245530 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bbxp2"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.246251 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.246511 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8kxw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.247437 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbvlk"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.247970 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.250930 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.251225 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.267553 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.267834 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.267958 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.270459 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.270886 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.271078 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.271199 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.271241 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.271230 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.271328 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272186 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272207 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272306 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272304 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272436 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272500 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 14:49:10 crc 
kubenswrapper[4823]: I0126 14:49:10.272568 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272641 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.279272 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272715 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272753 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272819 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.279687 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272842 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272900 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272920 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272971 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.272997 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273054 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273080 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273122 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273152 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273203 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273219 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273311 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273424 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273519 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: 
I0126 14:49:10.273597 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273655 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273663 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.273743 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.274040 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.274132 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.274738 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.274994 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.275057 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.275274 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.275387 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc 
kubenswrapper[4823]: I0126 14:49:10.275478 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.275540 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.275883 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.275971 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.276024 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.276421 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.277710 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.313423 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316553 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316710 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316739 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316786 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316576 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316925 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.316958 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317465 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljl56\" (UniqueName: \"kubernetes.io/projected/6a33769f-089d-482c-bb7c-5569c4a078a7-kube-api-access-ljl56\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317504 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317543 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317560 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcxfc\" (UniqueName: \"kubernetes.io/projected/4609bcb4-b5ef-43fa-85be-2d897f635951-kube-api-access-kcxfc\") pod \"downloads-7954f5f757-5b7zm\" (UID: 
\"4609bcb4-b5ef-43fa-85be-2d897f635951\") " pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317609 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eea18e5-bc89-4c10-a843-c8b374a239a2-serving-cert\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnq4\" (UniqueName: \"kubernetes.io/projected/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-kube-api-access-mjnq4\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317643 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-policies\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317666 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/204a2df3-b8d7-4998-8ff1-3c3a6112c666-images\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317684 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f5968d-4a95-44bc-9633-0bb7844b3bfb-metrics-tls\") pod \"dns-operator-744455d44c-z4f2q\" (UID: \"00f5968d-4a95-44bc-9633-0bb7844b3bfb\") " pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317701 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-config\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317717 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-audit-dir\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317735 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rd8n\" (UniqueName: \"kubernetes.io/projected/00f5968d-4a95-44bc-9633-0bb7844b3bfb-kube-api-access-7rd8n\") pod \"dns-operator-744455d44c-z4f2q\" (UID: \"00f5968d-4a95-44bc-9633-0bb7844b3bfb\") " pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvsc\" (UniqueName: \"kubernetes.io/projected/ab42db39-920c-4bd5-b524-d3c649e24f67-kube-api-access-ttvsc\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: 
I0126 14:49:10.317772 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317787 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f8927c-1301-492d-ae9a-487ec70b3038-serving-cert\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-config\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317821 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-config\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317836 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-serving-cert\") 
pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317851 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-node-pullsecrets\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317866 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-config\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317884 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a2df3-b8d7-4998-8ff1-3c3a6112c666-config\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317899 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-etcd-client\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317925 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-dir\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317942 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-audit\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.317974 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-audit-dir\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318009 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-serving-cert\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318056 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc 
kubenswrapper[4823]: I0126 14:49:10.318077 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318101 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318132 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-serving-cert\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318186 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/204a2df3-b8d7-4998-8ff1-3c3a6112c666-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318239 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a33769f-089d-482c-bb7c-5569c4a078a7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318290 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-client-ca\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318313 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4wqp\" (UniqueName: \"kubernetes.io/projected/c2f8927c-1301-492d-ae9a-487ec70b3038-kube-api-access-f4wqp\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318339 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-image-import-ca\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318383 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-trusted-ca\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318415 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-encryption-config\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318430 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a33769f-089d-482c-bb7c-5569c4a078a7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318460 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318476 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-encryption-config\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-etcd-client\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318529 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glbdl\" (UniqueName: \"kubernetes.io/projected/7eea18e5-bc89-4c10-a843-c8b374a239a2-kube-api-access-glbdl\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxttg\" (UniqueName: \"kubernetes.io/projected/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-kube-api-access-zxttg\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318577 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab42db39-920c-4bd5-b524-d3c649e24f67-config\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318620 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ab42db39-920c-4bd5-b524-d3c649e24f67-machine-approver-tls\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318641 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318656 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318674 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt69d\" (UniqueName: \"kubernetes.io/projected/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-kube-api-access-xt69d\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318690 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-config\") pod 
\"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318707 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmvsl\" (UniqueName: \"kubernetes.io/projected/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-kube-api-access-gmvsl\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318723 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-serving-cert\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318737 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-serving-cert\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318824 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318843 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318936 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-service-ca-bundle\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.318961 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-audit-policies\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319046 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab42db39-920c-4bd5-b524-d3c649e24f67-auth-proxy-config\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcdqx\" (UniqueName: 
\"kubernetes.io/projected/204a2df3-b8d7-4998-8ff1-3c3a6112c666-kube-api-access-pcdqx\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319130 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-client-ca\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319188 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-542g6\" (UniqueName: \"kubernetes.io/projected/b942d06c-fac8-4546-98a6-f36d0666d0d4-kube-api-access-542g6\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319207 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dqgk\" (UniqueName: \"kubernetes.io/projected/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-kube-api-access-2dqgk\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319727 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.319796 
4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.321158 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.321748 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.322140 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.322410 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.322448 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.322411 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.323196 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.325128 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.327599 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.328432 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.330238 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-p7srw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.336029 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.336332 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.336511 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.336874 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.336888 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.341339 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.341650 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.345196 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5hr6d"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.345246 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5vlmk"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.345611 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.346016 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.346334 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.346383 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.347754 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.348145 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.348748 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.349493 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.350110 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.353843 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.354339 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.354933 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.355153 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.363501 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.364721 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.367678 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.368155 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.369296 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.373461 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7qhz"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.374421 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.377161 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-clfjm"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.378973 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.379619 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.380719 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.385896 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g9nns"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.391519 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.391751 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.393923 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.404234 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.407803 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.409338 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-km977"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.412021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.412428 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.413153 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.417548 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.418240 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-v7ff8"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.418655 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sd92l"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.418686 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.418949 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.419865 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420039 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a33769f-089d-482c-bb7c-5569c4a078a7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-client-ca\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4wqp\" (UniqueName: \"kubernetes.io/projected/c2f8927c-1301-492d-ae9a-487ec70b3038-kube-api-access-f4wqp\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420094 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-image-import-ca\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420130 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f066caa-2e70-4ef1-ae84-da5b204e0d25-config\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420146 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-encryption-config\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-trusted-ca\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420192 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t95kb\" (UniqueName: \"kubernetes.io/projected/90fe06d2-db90-4559-855a-18be3ede4ad5-kube-api-access-t95kb\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420208 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-encryption-config\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc 
kubenswrapper[4823]: I0126 14:49:10.420238 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xtvs\" (UniqueName: \"kubernetes.io/projected/18f7273c-10d0-4c81-878f-d2ac07b0fb63-kube-api-access-4xtvs\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-serving-cert\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a33769f-089d-482c-bb7c-5569c4a078a7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420287 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420303 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420319 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-etcd-client\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kmzm\" (UniqueName: \"kubernetes.io/projected/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-kube-api-access-8kmzm\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420351 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420436 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.420891 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a33769f-089d-482c-bb7c-5569c4a078a7-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: 
\"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.421463 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.422869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.422935 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glbdl\" (UniqueName: \"kubernetes.io/projected/7eea18e5-bc89-4c10-a843-c8b374a239a2-kube-api-access-glbdl\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.422962 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxttg\" (UniqueName: \"kubernetes.io/projected/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-kube-api-access-zxttg\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.422986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" 
(UniqueName: \"kubernetes.io/secret/ab42db39-920c-4bd5-b524-d3c649e24f67-machine-approver-tls\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423005 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab42db39-920c-4bd5-b524-d3c649e24f67-config\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423023 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-oauth-config\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423044 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423135 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 
crc kubenswrapper[4823]: I0126 14:49:10.423156 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xt69d\" (UniqueName: \"kubernetes.io/projected/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-kube-api-access-xt69d\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423416 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-config\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-stats-auth\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-config\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423480 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f066caa-2e70-4ef1-ae84-da5b204e0d25-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: 
\"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423502 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmvsl\" (UniqueName: \"kubernetes.io/projected/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-kube-api-access-gmvsl\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423572 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423711 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-trusted-ca\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.423766 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4m2g6"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.424190 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.424468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.424573 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.425283 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.425755 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-serving-cert\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.425886 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-config\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.425904 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-serving-cert\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.425952 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63b17ba3-023d-46ef-9b4e-1936166074bc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.425986 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426072 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-service-ca-bundle\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: 
\"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426140 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-audit-policies\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-metrics-certs\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426281 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f066caa-2e70-4ef1-ae84-da5b204e0d25-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426339 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab42db39-920c-4bd5-b524-d3c649e24f67-auth-proxy-config\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426399 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcdqx\" 
(UniqueName: \"kubernetes.io/projected/204a2df3-b8d7-4998-8ff1-3c3a6112c666-kube-api-access-pcdqx\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab42db39-920c-4bd5-b524-d3c649e24f67-config\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-client-ca\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qcz2\" (UniqueName: \"kubernetes.io/projected/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-kube-api-access-7qcz2\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-542g6\" (UniqueName: \"kubernetes.io/projected/b942d06c-fac8-4546-98a6-f36d0666d0d4-kube-api-access-542g6\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 
14:49:10.426564 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dqgk\" (UniqueName: \"kubernetes.io/projected/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-kube-api-access-2dqgk\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljl56\" (UniqueName: \"kubernetes.io/projected/6a33769f-089d-482c-bb7c-5569c4a078a7-kube-api-access-ljl56\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426755 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426808 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-config\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: 
\"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426843 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f7273c-10d0-4c81-878f-d2ac07b0fb63-service-ca-bundle\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426932 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcxfc\" (UniqueName: \"kubernetes.io/projected/4609bcb4-b5ef-43fa-85be-2d897f635951-kube-api-access-kcxfc\") pod \"downloads-7954f5f757-5b7zm\" (UID: \"4609bcb4-b5ef-43fa-85be-2d897f635951\") " pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eea18e5-bc89-4c10-a843-c8b374a239a2-serving-cert\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.426838 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-client-ca\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427127 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-trusted-ca\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427147 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63b17ba3-023d-46ef-9b4e-1936166074bc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427173 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/204a2df3-b8d7-4998-8ff1-3c3a6112c666-images\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427199 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnq4\" (UniqueName: \"kubernetes.io/projected/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-kube-api-access-mjnq4\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427218 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-policies\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427236 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-metrics-tls\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427260 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f5968d-4a95-44bc-9633-0bb7844b3bfb-metrics-tls\") pod \"dns-operator-744455d44c-z4f2q\" (UID: \"00f5968d-4a95-44bc-9633-0bb7844b3bfb\") " pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-config\") pod 
\"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427304 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-audit-dir\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427324 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rd8n\" (UniqueName: \"kubernetes.io/projected/00f5968d-4a95-44bc-9633-0bb7844b3bfb-kube-api-access-7rd8n\") pod \"dns-operator-744455d44c-z4f2q\" (UID: \"00f5968d-4a95-44bc-9633-0bb7844b3bfb\") " pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427524 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttvsc\" (UniqueName: \"kubernetes.io/projected/ab42db39-920c-4bd5-b524-d3c649e24f67-kube-api-access-ttvsc\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427540 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-client-ca\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/90fe06d2-db90-4559-855a-18be3ede4ad5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427614 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427646 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f8927c-1301-492d-ae9a-487ec70b3038-serving-cert\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427674 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-config\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427716 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427747 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-node-pullsecrets\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427777 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-config\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427804 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-serving-cert\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427834 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-config\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427864 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-default-certificate\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.427991 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.428018 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.428034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z4f2q"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.429214 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.429454 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.429715 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-audit-policies\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.429752 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ab42db39-920c-4bd5-b524-d3c649e24f67-auth-proxy-config\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.430052 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.431508 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-policies\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.431576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-image-import-ca\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.431867 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-audit-dir\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.432255 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-config\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.432333 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5b7zm"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.432630 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.432649 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/204a2df3-b8d7-4998-8ff1-3c3a6112c666-images\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.432734 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-config\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.432987 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.433456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-serving-cert\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.433636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-encryption-config\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.433972 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-service-ca-bundle\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.434022 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a33769f-089d-482c-bb7c-5569c4a078a7-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.434251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.434327 
4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-node-pullsecrets\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.434908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-config\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:10 crc kubenswrapper[4823]: E0126 14:49:10.435109 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:51:12.435088993 +0000 UTC m=+269.120552098 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.435291 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-config\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.435399 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a2df3-b8d7-4998-8ff1-3c3a6112c666-config\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436061 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-etcd-client\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436099 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-oauth-serving-cert\") pod \"console-f9d7485db-bbxp2\" (UID: 
\"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436159 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-dir\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436180 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-audit\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436893 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/204a2df3-b8d7-4998-8ff1-3c3a6112c666-config\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436949 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-serving-cert\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: 
\"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.436975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437381 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-encryption-config\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437625 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-audit\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437685 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-dir\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437807 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-audit-dir\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437854 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-service-ca\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437863 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-audit-dir\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437922 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-trusted-ca-bundle\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.437961 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438153 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438208 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438409 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90fe06d2-db90-4559-855a-18be3ede4ad5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438533 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63b17ba3-023d-46ef-9b4e-1936166074bc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438583 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-serving-cert\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/204a2df3-b8d7-4998-8ff1-3c3a6112c666-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.438847 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-etcd-serving-ca\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.442764 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.442810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-serving-cert\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443084 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-serving-cert\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443216 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443268 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-etcd-client\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eea18e5-bc89-4c10-a843-c8b374a239a2-serving-cert\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: 
\"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443521 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443828 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/00f5968d-4a95-44bc-9633-0bb7844b3bfb-metrics-tls\") pod \"dns-operator-744455d44c-z4f2q\" (UID: \"00f5968d-4a95-44bc-9633-0bb7844b3bfb\") " pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.443908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ab42db39-920c-4bd5-b524-d3c649e24f67-machine-approver-tls\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.444020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/204a2df3-b8d7-4998-8ff1-3c3a6112c666-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.444548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.444804 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.444844 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.444912 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-etcd-client\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.445683 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f8927c-1301-492d-ae9a-487ec70b3038-serving-cert\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 
14:49:10.445867 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.446196 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.447305 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.450682 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v7zhj"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.452124 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.452796 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-serving-cert\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.452958 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-serving-cert\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " 
pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.454479 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.456608 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.458522 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6vd9x"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.458759 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.460111 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.461463 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hkfx2"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.462732 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.462959 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.464082 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bbxp2"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.465637 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.466035 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.466196 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7qhz"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.467071 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.468080 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cdl85"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.469143 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.470142 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.471183 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.472231 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5vlmk"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.473310 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbvlk"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.482522 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-km977"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.485582 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-clfjm"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.485769 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.487025 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hkfx2"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.488165 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.489438 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.490783 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g9nns"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.492077 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8"] Jan 26 14:49:10 crc 
kubenswrapper[4823]: I0126 14:49:10.493514 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zhskc"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.494588 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.494856 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-c5cgw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.495433 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.495828 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhskc"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.496970 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c5cgw"] Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.505635 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.526287 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-oauth-serving-cert\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-service-ca\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-trusted-ca-bundle\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539603 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90fe06d2-db90-4559-855a-18be3ede4ad5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539629 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63b17ba3-023d-46ef-9b4e-1936166074bc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539658 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539697 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t95kb\" (UniqueName: \"kubernetes.io/projected/90fe06d2-db90-4559-855a-18be3ede4ad5-kube-api-access-t95kb\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539723 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.539745 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f066caa-2e70-4ef1-ae84-da5b204e0d25-config\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xtvs\" (UniqueName: \"kubernetes.io/projected/18f7273c-10d0-4c81-878f-d2ac07b0fb63-kube-api-access-4xtvs\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540083 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-serving-cert\") pod \"console-f9d7485db-bbxp2\" (UID: 
\"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kmzm\" (UniqueName: \"kubernetes.io/projected/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-kube-api-access-8kmzm\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540386 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-oauth-serving-cert\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f066caa-2e70-4ef1-ae84-da5b204e0d25-config\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540705 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-trusted-ca-bundle\") pod \"console-f9d7485db-bbxp2\" 
(UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540811 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-oauth-config\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540845 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-stats-auth\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540869 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-config\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.540890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f066caa-2e70-4ef1-ae84-da5b204e0d25-kube-api-access\") pod 
\"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.541601 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-service-ca\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.541784 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-config\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.542427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63b17ba3-023d-46ef-9b4e-1936166074bc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.542604 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f066caa-2e70-4ef1-ae84-da5b204e0d25-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543250 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-serving-cert\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543553 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-metrics-certs\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543603 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qcz2\" (UniqueName: \"kubernetes.io/projected/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-kube-api-access-7qcz2\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-config\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543926 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543783 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-oauth-config\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.543993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f7273c-10d0-4c81-878f-d2ac07b0fb63-service-ca-bundle\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.544017 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-trusted-ca\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.544489 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63b17ba3-023d-46ef-9b4e-1936166074bc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.544686 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-metrics-tls\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.544852 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90fe06d2-db90-4559-855a-18be3ede4ad5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.544904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.544977 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-config\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.545444 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-default-certificate\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.545573 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.546204 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f066caa-2e70-4ef1-ae84-da5b204e0d25-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.546297 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.548075 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/90fe06d2-db90-4559-855a-18be3ede4ad5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.548445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.549143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.565911 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.570481 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90fe06d2-db90-4559-855a-18be3ede4ad5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.583856 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.586727 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.594236 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.606853 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.612981 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.625706 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.641512 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-metrics-tls\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.658654 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.666502 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-trusted-ca\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.667196 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.707549 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.725798 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.740654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-default-certificate\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.745819 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.765342 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.777078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f7273c-10d0-4c81-878f-d2ac07b0fb63-service-ca-bundle\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.785716 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.795802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-stats-auth\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 
14:49:10.806346 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.827287 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.842163 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f7273c-10d0-4c81-878f-d2ac07b0fb63-metrics-certs\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.845100 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: W0126 14:49:10.853150 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-75bed70296c18c9210a4c5a4d76bc894b88b364c59f354958c26bc2c19cee5b3 WatchSource:0}: Error finding container 75bed70296c18c9210a4c5a4d76bc894b88b364c59f354958c26bc2c19cee5b3: Status 404 returned error can't find the container with id 75bed70296c18c9210a4c5a4d76bc894b88b364c59f354958c26bc2c19cee5b3 Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.859738 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"75bed70296c18c9210a4c5a4d76bc894b88b364c59f354958c26bc2c19cee5b3"} Jan 26 14:49:10 crc kubenswrapper[4823]: W0126 14:49:10.860774 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-023d269fdf389d12f45ed09f142887f3b03e129fab0980c8d5dc598ce03e9262 WatchSource:0}: Error finding container 023d269fdf389d12f45ed09f142887f3b03e129fab0980c8d5dc598ce03e9262: Status 404 returned error can't find the container with id 023d269fdf389d12f45ed09f142887f3b03e129fab0980c8d5dc598ce03e9262 Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.864838 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.885522 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.896134 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63b17ba3-023d-46ef-9b4e-1936166074bc-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.905899 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.925627 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.932174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63b17ba3-023d-46ef-9b4e-1936166074bc-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: 
\"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.946457 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 14:49:10 crc kubenswrapper[4823]: I0126 14:49:10.987210 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 14:49:10 crc kubenswrapper[4823]: W0126 14:49:10.997064 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0e9726f973aaf1a40b1489f21a3d5263df1d8d2dcf820d4048995670123be48d WatchSource:0}: Error finding container 0e9726f973aaf1a40b1489f21a3d5263df1d8d2dcf820d4048995670123be48d: Status 404 returned error can't find the container with id 0e9726f973aaf1a40b1489f21a3d5263df1d8d2dcf820d4048995670123be48d Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.007009 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.026003 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.045248 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.065958 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.085160 4823 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.105516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.125626 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.146243 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.165615 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.185147 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.205405 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.226386 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.245813 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.266457 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.286023 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 14:49:11 crc 
kubenswrapper[4823]: I0126 14:49:11.305818 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.326873 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.345745 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.364005 4823 request.go:700] Waited for 1.008386897s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcatalog-operator-serving-cert&limit=500&resourceVersion=0 Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.365980 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.385855 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.406564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.425984 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.446090 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 
14:49:11.465983 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.491163 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.505038 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.526550 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.545793 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.565281 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.585857 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.605531 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.625225 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.645755 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.665207 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.685529 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.705869 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.724994 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.745316 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.765114 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.785527 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.805562 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.825177 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.845956 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.862925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0e9726f973aaf1a40b1489f21a3d5263df1d8d2dcf820d4048995670123be48d"} Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.864189 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"023d269fdf389d12f45ed09f142887f3b03e129fab0980c8d5dc598ce03e9262"} Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.865135 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.885987 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.906686 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.926797 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.945994 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.967307 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 14:49:11 crc kubenswrapper[4823]: I0126 14:49:11.987095 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.006555 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.026347 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.068440 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xt69d\" (UniqueName: \"kubernetes.io/projected/d88c9c1d-3f83-4a0a-b996-a012f7a0dd36-kube-api-access-xt69d\") pod \"authentication-operator-69f744f599-v7zhj\" (UID: \"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.083687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glbdl\" (UniqueName: \"kubernetes.io/projected/7eea18e5-bc89-4c10-a843-c8b374a239a2-kube-api-access-glbdl\") pod \"controller-manager-879f6c89f-5hr6d\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.106182 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxttg\" (UniqueName: \"kubernetes.io/projected/3e8743ea-e343-4d36-8d1f-645b59c9a7fd-kube-api-access-zxttg\") pod \"apiserver-7bbb656c7d-mtvt4\" (UID: \"3e8743ea-e343-4d36-8d1f-645b59c9a7fd\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.124171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmvsl\" (UniqueName: \"kubernetes.io/projected/d241e4dc-13ab-49d4-99a0-9fa3d654cb0f-kube-api-access-gmvsl\") pod \"console-operator-58897d9998-sd92l\" (UID: \"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f\") " pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:12 crc 
kubenswrapper[4823]: I0126 14:49:12.142653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4wqp\" (UniqueName: \"kubernetes.io/projected/c2f8927c-1301-492d-ae9a-487ec70b3038-kube-api-access-f4wqp\") pod \"route-controller-manager-6576b87f9c-tdvm4\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.163876 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-542g6\" (UniqueName: \"kubernetes.io/projected/b942d06c-fac8-4546-98a6-f36d0666d0d4-kube-api-access-542g6\") pod \"oauth-openshift-558db77b4-6vd9x\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.176744 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.190241 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dqgk\" (UniqueName: \"kubernetes.io/projected/b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31-kube-api-access-2dqgk\") pod \"apiserver-76f77b778f-d8kxw\" (UID: \"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31\") " pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.199106 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rd8n\" (UniqueName: \"kubernetes.io/projected/00f5968d-4a95-44bc-9633-0bb7844b3bfb-kube-api-access-7rd8n\") pod \"dns-operator-744455d44c-z4f2q\" (UID: \"00f5968d-4a95-44bc-9633-0bb7844b3bfb\") " pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.212526 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.221178 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcdqx\" (UniqueName: \"kubernetes.io/projected/204a2df3-b8d7-4998-8ff1-3c3a6112c666-kube-api-access-pcdqx\") pod \"machine-api-operator-5694c8668f-4m2g6\" (UID: \"204a2df3-b8d7-4998-8ff1-3c3a6112c666\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.243744 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljl56\" (UniqueName: \"kubernetes.io/projected/6a33769f-089d-482c-bb7c-5569c4a078a7-kube-api-access-ljl56\") pod \"openshift-apiserver-operator-796bbdcf4f-4kchq\" (UID: \"6a33769f-089d-482c-bb7c-5569c4a078a7\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.263991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnq4\" (UniqueName: \"kubernetes.io/projected/11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a-kube-api-access-mjnq4\") pod \"openshift-config-operator-7777fb866f-cdl85\" (UID: \"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.280466 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcxfc\" (UniqueName: \"kubernetes.io/projected/4609bcb4-b5ef-43fa-85be-2d897f635951-kube-api-access-kcxfc\") pod \"downloads-7954f5f757-5b7zm\" (UID: \"4609bcb4-b5ef-43fa-85be-2d897f635951\") " pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.293450 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.301854 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttvsc\" (UniqueName: \"kubernetes.io/projected/ab42db39-920c-4bd5-b524-d3c649e24f67-kube-api-access-ttvsc\") pod \"machine-approver-56656f9798-rww8p\" (UID: \"ab42db39-920c-4bd5-b524-d3c649e24f67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.307311 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.324404 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.325816 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.329438 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.345992 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.346158 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.361274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-sd92l"] Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.364050 4823 request.go:700] Waited for 1.869197261s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/configmaps?fieldSelector=metadata.name%3Ddns-default&limit=500&resourceVersion=0 Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.365996 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.372845 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.379695 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.386070 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.406117 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.409973 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-z4f2q"] Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.423636 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.425788 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 14:49:12 crc kubenswrapper[4823]: W0126 14:49:12.444920 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00f5968d_4a95_44bc_9633_0bb7844b3bfb.slice/crio-d2c92bbd6bc1ddc662e40b045451c2dbd93672926bb90a7f5ca32b6076333ddd WatchSource:0}: Error finding container d2c92bbd6bc1ddc662e40b045451c2dbd93672926bb90a7f5ca32b6076333ddd: Status 404 returned error can't find the container with id d2c92bbd6bc1ddc662e40b045451c2dbd93672926bb90a7f5ca32b6076333ddd Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.446473 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.450266 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.463142 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.466128 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.486471 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.490822 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.529927 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t95kb\" (UniqueName: \"kubernetes.io/projected/90fe06d2-db90-4559-855a-18be3ede4ad5-kube-api-access-t95kb\") pod \"openshift-controller-manager-operator-756b6f6bc6-l9mhl\" (UID: \"90fe06d2-db90-4559-855a-18be3ede4ad5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.532556 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.543780 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.565672 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xtvs\" (UniqueName: \"kubernetes.io/projected/18f7273c-10d0-4c81-878f-d2ac07b0fb63-kube-api-access-4xtvs\") pod \"router-default-5444994796-p7srw\" (UID: \"18f7273c-10d0-4c81-878f-d2ac07b0fb63\") " pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.590898 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d8kxw"] Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.592966 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kmzm\" (UniqueName: \"kubernetes.io/projected/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-kube-api-access-8kmzm\") pod \"console-f9d7485db-bbxp2\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.593166 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.607809 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kp4gb\" (UID: \"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.624995 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.637164 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.647258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f066caa-2e70-4ef1-ae84-da5b204e0d25-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-llt2m\" (UID: \"0f066caa-2e70-4ef1-ae84-da5b204e0d25\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.655561 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5hr6d"] Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.665045 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qcz2\" (UniqueName: \"kubernetes.io/projected/1fce63dc-472e-4a08-b2c0-0228c9f41cc4-kube-api-access-7qcz2\") pod \"ingress-operator-5b745b69d9-dsmld\" (UID: \"1fce63dc-472e-4a08-b2c0-0228c9f41cc4\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.667646 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/63b17ba3-023d-46ef-9b4e-1936166074bc-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zq4h\" (UID: \"63b17ba3-023d-46ef-9b4e-1936166074bc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.676755 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.677683 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" Jan 26 14:49:12 crc kubenswrapper[4823]: W0126 14:49:12.746331 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7eea18e5_bc89_4c10_a843_c8b374a239a2.slice/crio-302938f753799ff5b91d7742a2ba0c225c306102cf51619c5714375db5e8559d WatchSource:0}: Error finding container 302938f753799ff5b91d7742a2ba0c225c306102cf51619c5714375db5e8559d: Status 404 returned error can't find the container with id 302938f753799ff5b91d7742a2ba0c225c306102cf51619c5714375db5e8559d Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/562f489c-010a-4bcf-9db6-524717e4c0eb-proxy-tls\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790385 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj4r7\" (UniqueName: \"kubernetes.io/projected/e753db28-0960-4c2a-bd93-00e8cd25ad61-kube-api-access-xj4r7\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790405 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmklh\" (UniqueName: \"kubernetes.io/projected/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-kube-api-access-tmklh\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790423 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrhw\" (UniqueName: \"kubernetes.io/projected/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-kube-api-access-mnrhw\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-socket-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790461 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-config-volume\") pod 
\"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sx2g\" (UniqueName: \"kubernetes.io/projected/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-kube-api-access-7sx2g\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790529 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790569 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr2p9\" (UniqueName: \"kubernetes.io/projected/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-kube-api-access-fr2p9\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790589 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ggmf\" (UniqueName: \"kubernetes.io/projected/c5857fd5-1c26-4ffd-a779-df738b7ad0b9-kube-api-access-8ggmf\") pod \"migrator-59844c95c7-nnkhw\" (UID: \"c5857fd5-1c26-4ffd-a779-df738b7ad0b9\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790630 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aff73130-88e7-4a8b-9b78-9af559e12a71-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790648 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-plugins-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790666 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgd2n\" (UniqueName: \"kubernetes.io/projected/029bd494-0ffa-4390-995e-bb26fdbbfbe7-kube-api-access-vgd2n\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-serving-cert\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-registration-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rcth\" (UniqueName: \"kubernetes.io/projected/001d6d03-e3da-4ee8-ae26-68e1775403fc-kube-api-access-2rcth\") pod \"cluster-samples-operator-665b6dd947-z44wh\" (UID: \"001d6d03-e3da-4ee8-ae26-68e1775403fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790785 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b61dff80-b5ca-454b-ae88-f45d20097560-proxy-tls\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790804 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn5vs\" (UniqueName: \"kubernetes.io/projected/99e64d4c-8fe7-4eec-ad1d-c10d740fccbb-kube-api-access-mn5vs\") pod \"package-server-manager-789f6589d5-qs9mc\" (UID: \"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790823 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/001d6d03-e3da-4ee8-ae26-68e1775403fc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z44wh\" (UID: \"001d6d03-e3da-4ee8-ae26-68e1775403fc\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790841 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-mountpoint-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790859 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmszb\" (UniqueName: \"kubernetes.io/projected/010c3f80-32bc-4a56-b1e9-7503e757192f-kube-api-access-bmszb\") pod \"control-plane-machine-set-operator-78cbb6b69f-fhzvg\" (UID: \"010c3f80-32bc-4a56-b1e9-7503e757192f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790888 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-signing-cabundle\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790936 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790965 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/b15ca477-8221-4895-bd12-dbd5cfd1bfa9-cert\") pod \"ingress-canary-c5cgw\" (UID: \"b15ca477-8221-4895-bd12-dbd5cfd1bfa9\") " pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.790993 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-tls\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791011 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e753db28-0960-4c2a-bd93-00e8cd25ad61-apiservice-cert\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-service-ca\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791061 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029bd494-0ffa-4390-995e-bb26fdbbfbe7-config-volume\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791104 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrhqq\" (UniqueName: \"kubernetes.io/projected/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-kube-api-access-qrhqq\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791123 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-signing-key\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791142 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b61dff80-b5ca-454b-ae88-f45d20097560-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791160 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-serving-cert\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791178 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/21e659aa-cc22-45fa-ac47-73de9aab039d-certs\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") 
" pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/562f489c-010a-4bcf-9db6-524717e4c0eb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791318 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf2km\" (UniqueName: \"kubernetes.io/projected/4f705cc6-53c8-4781-b33b-d0e5a386a22d-kube-api-access-hf2km\") pod \"multus-admission-controller-857f4d67dd-clfjm\" (UID: \"4f705cc6-53c8-4781-b33b-d0e5a386a22d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct44j\" (UniqueName: \"kubernetes.io/projected/b61dff80-b5ca-454b-ae88-f45d20097560-kube-api-access-ct44j\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc 
kubenswrapper[4823]: I0126 14:49:12.791388 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/21e659aa-cc22-45fa-ac47-73de9aab039d-node-bootstrap-token\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791420 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-trusted-ca\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791517 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-926mt\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-kube-api-access-926mt\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791543 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdrw\" (UniqueName: 
\"kubernetes.io/projected/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-kube-api-access-rxdrw\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791562 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-client\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791589 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k4c6\" (UniqueName: \"kubernetes.io/projected/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-kube-api-access-2k4c6\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e753db28-0960-4c2a-bd93-00e8cd25ad61-webhook-cert\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791727 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-certificates\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791753 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029bd494-0ffa-4390-995e-bb26fdbbfbe7-metrics-tls\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791773 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f705cc6-53c8-4781-b33b-d0e5a386a22d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-clfjm\" (UID: \"4f705cc6-53c8-4781-b33b-d0e5a386a22d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791798 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g25f\" (UniqueName: \"kubernetes.io/projected/b15ca477-8221-4895-bd12-dbd5cfd1bfa9-kube-api-access-5g25f\") pod \"ingress-canary-c5cgw\" (UID: \"b15ca477-8221-4895-bd12-dbd5cfd1bfa9\") " pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791830 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-bound-sa-token\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791849 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6nc\" (UniqueName: \"kubernetes.io/projected/21e659aa-cc22-45fa-ac47-73de9aab039d-kube-api-access-ms6nc\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791925 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-config\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791949 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e64d4c-8fe7-4eec-ad1d-c10d740fccbb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qs9mc\" (UID: \"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.791967 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-secret-volume\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-ca\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxj99\" (UniqueName: \"kubernetes.io/projected/189c2b61-53a3-4182-b251-2b8e6feddbcf-kube-api-access-jxj99\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-srv-cert\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792113 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh27r\" (UniqueName: \"kubernetes.io/projected/562f489c-010a-4bcf-9db6-524717e4c0eb-kube-api-access-zh27r\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792146 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792168 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-config\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792190 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b61dff80-b5ca-454b-ae88-f45d20097560-images\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792243 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792276 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aff73130-88e7-4a8b-9b78-9af559e12a71-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.792322 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e753db28-0960-4c2a-bd93-00e8cd25ad61-tmpfs\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: E0126 14:49:12.809445 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:13.309414919 +0000 UTC m=+149.994878024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.812920 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-csi-data-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: W0126 14:49:12.812991 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f7273c_10d0_4c81_878f_d2ac07b0fb63.slice/crio-513ad7afa251e089e98a65583414b489bdf076f62c1be39c6b27e6fb72e293bc WatchSource:0}: Error finding container 513ad7afa251e089e98a65583414b489bdf076f62c1be39c6b27e6fb72e293bc: Status 404 returned error can't find the container with id 513ad7afa251e089e98a65583414b489bdf076f62c1be39c6b27e6fb72e293bc Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.813157 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/010c3f80-32bc-4a56-b1e9-7503e757192f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fhzvg\" (UID: \"010c3f80-32bc-4a56-b1e9-7503e757192f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.824083 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vx8q\" (UniqueName: \"kubernetes.io/projected/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-kube-api-access-2vx8q\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.824151 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.896244 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" event={"ID":"ab42db39-920c-4bd5-b524-d3c649e24f67","Type":"ContainerStarted","Data":"00dd0c5a21a768506ea82f890bfbeca1e177c11fba77887d1a4e24909f9868c3"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.914726 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.924947 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925250 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sx2g\" (UniqueName: \"kubernetes.io/projected/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-kube-api-access-7sx2g\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925293 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925339 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr2p9\" (UniqueName: \"kubernetes.io/projected/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-kube-api-access-fr2p9\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925382 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8ggmf\" (UniqueName: \"kubernetes.io/projected/c5857fd5-1c26-4ffd-a779-df738b7ad0b9-kube-api-access-8ggmf\") pod \"migrator-59844c95c7-nnkhw\" (UID: \"c5857fd5-1c26-4ffd-a779-df738b7ad0b9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925406 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aff73130-88e7-4a8b-9b78-9af559e12a71-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925431 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-plugins-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925450 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgd2n\" (UniqueName: \"kubernetes.io/projected/029bd494-0ffa-4390-995e-bb26fdbbfbe7-kube-api-access-vgd2n\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925471 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-serving-cert\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925533 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-registration-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn5vs\" (UniqueName: \"kubernetes.io/projected/99e64d4c-8fe7-4eec-ad1d-c10d740fccbb-kube-api-access-mn5vs\") pod \"package-server-manager-789f6589d5-qs9mc\" (UID: \"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925584 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rcth\" (UniqueName: \"kubernetes.io/projected/001d6d03-e3da-4ee8-ae26-68e1775403fc-kube-api-access-2rcth\") pod \"cluster-samples-operator-665b6dd947-z44wh\" (UID: \"001d6d03-e3da-4ee8-ae26-68e1775403fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925608 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b61dff80-b5ca-454b-ae88-f45d20097560-proxy-tls\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925631 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/001d6d03-e3da-4ee8-ae26-68e1775403fc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z44wh\" (UID: 
\"001d6d03-e3da-4ee8-ae26-68e1775403fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-mountpoint-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925679 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmszb\" (UniqueName: \"kubernetes.io/projected/010c3f80-32bc-4a56-b1e9-7503e757192f-kube-api-access-bmszb\") pod \"control-plane-machine-set-operator-78cbb6b69f-fhzvg\" (UID: \"010c3f80-32bc-4a56-b1e9-7503e757192f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-signing-cabundle\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925729 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925749 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/b15ca477-8221-4895-bd12-dbd5cfd1bfa9-cert\") pod \"ingress-canary-c5cgw\" (UID: \"b15ca477-8221-4895-bd12-dbd5cfd1bfa9\") " pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-tls\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e753db28-0960-4c2a-bd93-00e8cd25ad61-apiservice-cert\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925813 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029bd494-0ffa-4390-995e-bb26fdbbfbe7-config-volume\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925834 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-service-ca\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925852 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrhqq\" (UniqueName: 
\"kubernetes.io/projected/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-kube-api-access-qrhqq\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925869 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-signing-key\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925887 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b61dff80-b5ca-454b-ae88-f45d20097560-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-serving-cert\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925920 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/21e659aa-cc22-45fa-ac47-73de9aab039d-certs\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925938 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/562f489c-010a-4bcf-9db6-524717e4c0eb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925976 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6eaabec-1376-4e26-898a-70d39fad7903-srv-cert\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.925996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf2km\" (UniqueName: \"kubernetes.io/projected/4f705cc6-53c8-4781-b33b-d0e5a386a22d-kube-api-access-hf2km\") pod \"multus-admission-controller-857f4d67dd-clfjm\" (UID: \"4f705cc6-53c8-4781-b33b-d0e5a386a22d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926031 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct44j\" (UniqueName: \"kubernetes.io/projected/b61dff80-b5ca-454b-ae88-f45d20097560-kube-api-access-ct44j\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926047 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/21e659aa-cc22-45fa-ac47-73de9aab039d-node-bootstrap-token\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-trusted-ca\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926080 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-926mt\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-kube-api-access-926mt\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926096 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdrw\" (UniqueName: \"kubernetes.io/projected/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-kube-api-access-rxdrw\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926116 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-client\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k4c6\" (UniqueName: \"kubernetes.io/projected/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-kube-api-access-2k4c6\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926184 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e753db28-0960-4c2a-bd93-00e8cd25ad61-webhook-cert\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926200 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-certificates\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926218 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029bd494-0ffa-4390-995e-bb26fdbbfbe7-metrics-tls\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926237 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f705cc6-53c8-4781-b33b-d0e5a386a22d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-clfjm\" (UID: \"4f705cc6-53c8-4781-b33b-d0e5a386a22d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926253 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g25f\" (UniqueName: \"kubernetes.io/projected/b15ca477-8221-4895-bd12-dbd5cfd1bfa9-kube-api-access-5g25f\") pod \"ingress-canary-c5cgw\" (UID: \"b15ca477-8221-4895-bd12-dbd5cfd1bfa9\") " pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926321 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-bound-sa-token\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: 
\"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6nc\" (UniqueName: \"kubernetes.io/projected/21e659aa-cc22-45fa-ac47-73de9aab039d-kube-api-access-ms6nc\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-config\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926634 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e64d4c-8fe7-4eec-ad1d-c10d740fccbb-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qs9mc\" (UID: \"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926653 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-secret-volume\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926674 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-ca\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926698 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxj99\" (UniqueName: \"kubernetes.io/projected/189c2b61-53a3-4182-b251-2b8e6feddbcf-kube-api-access-jxj99\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-srv-cert\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926747 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh27r\" (UniqueName: \"kubernetes.io/projected/562f489c-010a-4bcf-9db6-524717e4c0eb-kube-api-access-zh27r\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926772 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-config\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926789 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b61dff80-b5ca-454b-ae88-f45d20097560-images\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926808 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926862 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aff73130-88e7-4a8b-9b78-9af559e12a71-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926879 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e753db28-0960-4c2a-bd93-00e8cd25ad61-tmpfs\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926897 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-csi-data-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926915 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/010c3f80-32bc-4a56-b1e9-7503e757192f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fhzvg\" (UID: \"010c3f80-32bc-4a56-b1e9-7503e757192f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926933 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8jss\" (UniqueName: \"kubernetes.io/projected/b6eaabec-1376-4e26-898a-70d39fad7903-kube-api-access-x8jss\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vx8q\" (UniqueName: \"kubernetes.io/projected/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-kube-api-access-2vx8q\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926969 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.926987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/562f489c-010a-4bcf-9db6-524717e4c0eb-proxy-tls\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927002 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj4r7\" (UniqueName: \"kubernetes.io/projected/e753db28-0960-4c2a-bd93-00e8cd25ad61-kube-api-access-xj4r7\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927017 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6eaabec-1376-4e26-898a-70d39fad7903-profile-collector-cert\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmklh\" (UniqueName: \"kubernetes.io/projected/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-kube-api-access-tmklh\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927052 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-config-volume\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927079 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrhw\" (UniqueName: \"kubernetes.io/projected/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-kube-api-access-mnrhw\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927095 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-socket-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.927438 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-socket-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: E0126 14:49:12.927567 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:13.427503353 +0000 UTC m=+150.112966458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.932837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-service-ca\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.932901 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-plugins-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.933755 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.935330 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b61dff80-b5ca-454b-ae88-f45d20097560-auth-proxy-config\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.935441 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-registration-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.935551 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b61dff80-b5ca-454b-ae88-f45d20097560-images\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.936625 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/562f489c-010a-4bcf-9db6-524717e4c0eb-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.939180 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-mountpoint-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.940171 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-config\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.941586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e753db28-0960-4c2a-bd93-00e8cd25ad61-tmpfs\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.941766 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aff73130-88e7-4a8b-9b78-9af559e12a71-ca-trust-extracted\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.941895 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-signing-cabundle\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.941947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.941973 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/189c2b61-53a3-4182-b251-2b8e6feddbcf-csi-data-dir\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.942796 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-trusted-ca\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.942973 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.943851 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aff73130-88e7-4a8b-9b78-9af559e12a71-installation-pull-secrets\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.943852 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-tls\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.945472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/029bd494-0ffa-4390-995e-bb26fdbbfbe7-metrics-tls\") pod \"dns-default-zhskc\" 
(UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.949226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-certificates\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.950549 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"4b33441d50f4bbea8e42dabc2dc33f06e0f111a6400d9edea62ead27f75c0576"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.952988 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-config\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.953096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-config-volume\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.953227 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/029bd494-0ffa-4390-995e-bb26fdbbfbe7-config-volume\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " 
pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.953781 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-ca\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.953947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/010c3f80-32bc-4a56-b1e9-7503e757192f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-fhzvg\" (UID: \"010c3f80-32bc-4a56-b1e9-7503e757192f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.954417 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.956609 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.957008 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e753db28-0960-4c2a-bd93-00e8cd25ad61-webhook-cert\") pod 
\"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.957682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"89062b6df1233b09dfa69fb5eb20bc8686b2b31bb109e4b85f3ffee8093e8f77"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.963827 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4"] Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.967003 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sx2g\" (UniqueName: \"kubernetes.io/projected/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-kube-api-access-7sx2g\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.967098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-signing-key\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.967120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e753db28-0960-4c2a-bd93-00e8cd25ad61-apiservice-cert\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.974086 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f705cc6-53c8-4781-b33b-d0e5a386a22d-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-clfjm\" (UID: \"4f705cc6-53c8-4781-b33b-d0e5a386a22d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.974334 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-etcd-client\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.974451 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-serving-cert\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.974405 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" event={"ID":"7eea18e5-bc89-4c10-a843-c8b374a239a2","Type":"ContainerStarted","Data":"302938f753799ff5b91d7742a2ba0c225c306102cf51619c5714375db5e8559d"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.974825 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b15ca477-8221-4895-bd12-dbd5cfd1bfa9-cert\") pod \"ingress-canary-c5cgw\" (UID: \"b15ca477-8221-4895-bd12-dbd5cfd1bfa9\") " pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.974999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-srv-cert\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.975014 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.975628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-secret-volume\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.976632 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3184da5-d52f-4dda-a92f-2832a6f4dd3e-serving-cert\") pod \"etcd-operator-b45778765-5vlmk\" (UID: \"e3184da5-d52f-4dda-a92f-2832a6f4dd3e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.977096 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p7srw" event={"ID":"18f7273c-10d0-4c81-878f-d2ac07b0fb63","Type":"ContainerStarted","Data":"513ad7afa251e089e98a65583414b489bdf076f62c1be39c6b27e6fb72e293bc"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.977277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/21e659aa-cc22-45fa-ac47-73de9aab039d-certs\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.977282 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b61dff80-b5ca-454b-ae88-f45d20097560-proxy-tls\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.977629 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/21e659aa-cc22-45fa-ac47-73de9aab039d-node-bootstrap-token\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.978048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.980106 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v7zhj"] Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.980300 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/99e64d4c-8fe7-4eec-ad1d-c10d740fccbb-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-qs9mc\" (UID: \"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.981983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sd92l" event={"ID":"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f","Type":"ContainerStarted","Data":"55841140942e712c9ee8d2a0899034c23f5a7540e61bcfc2c600fa06ee3ee56a"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.982031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-sd92l" event={"ID":"d241e4dc-13ab-49d4-99a0-9fa3d654cb0f","Type":"ContainerStarted","Data":"92ec4e5c15a35e245b0b716f05bc2e5e56cc17d313f704afa95905a99c7485e2"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.982146 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/001d6d03-e3da-4ee8-ae26-68e1775403fc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-z44wh\" (UID: \"001d6d03-e3da-4ee8-ae26-68e1775403fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.982377 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.982830 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/562f489c-010a-4bcf-9db6-524717e4c0eb-proxy-tls\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.984065 4823 patch_prober.go:28] 
interesting pod/console-operator-58897d9998-sd92l container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.984104 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-sd92l" podUID="d241e4dc-13ab-49d4-99a0-9fa3d654cb0f" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.984694 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k4c6\" (UniqueName: \"kubernetes.io/projected/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-kube-api-access-2k4c6\") pod \"collect-profiles-29490645-7md54\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.985417 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b7ec8e0cafd9dc74001d029de80ae7945446f0213548300451552c5b30b4daac"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.986596 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.988562 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" event={"ID":"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31","Type":"ContainerStarted","Data":"e66d2f9688f3a165357b306c375aa463b4e0499c4c99a8311ae93f2607052586"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 
14:49:12.990670 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" event={"ID":"00f5968d-4a95-44bc-9633-0bb7844b3bfb","Type":"ContainerStarted","Data":"d2c92bbd6bc1ddc662e40b045451c2dbd93672926bb90a7f5ca32b6076333ddd"} Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.992521 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:12 crc kubenswrapper[4823]: I0126 14:49:12.999575 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr2p9\" (UniqueName: \"kubernetes.io/projected/9d12dc0b-ae5f-40a1-b3b0-59dfbec22317-kube-api-access-fr2p9\") pod \"kube-storage-version-migrator-operator-b67b599dd-d7wzd\" (UID: \"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.005591 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.028467 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6eaabec-1376-4e26-898a-70d39fad7903-srv-cert\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.028666 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.028718 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8jss\" (UniqueName: \"kubernetes.io/projected/b6eaabec-1376-4e26-898a-70d39fad7903-kube-api-access-x8jss\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.028762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6eaabec-1376-4e26-898a-70d39fad7903-profile-collector-cert\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.033335 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:13.533317474 +0000 UTC m=+150.218780649 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.038719 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6eaabec-1376-4e26-898a-70d39fad7903-profile-collector-cert\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.040734 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6eaabec-1376-4e26-898a-70d39fad7903-srv-cert\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.049074 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ggmf\" (UniqueName: \"kubernetes.io/projected/c5857fd5-1c26-4ffd-a779-df738b7ad0b9-kube-api-access-8ggmf\") pod \"migrator-59844c95c7-nnkhw\" (UID: \"c5857fd5-1c26-4ffd-a779-df738b7ad0b9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" Jan 26 14:49:13 crc kubenswrapper[4823]: 
I0126 14:49:13.057696 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.092269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgd2n\" (UniqueName: \"kubernetes.io/projected/029bd494-0ffa-4390-995e-bb26fdbbfbe7-kube-api-access-vgd2n\") pod \"dns-default-zhskc\" (UID: \"029bd494-0ffa-4390-995e-bb26fdbbfbe7\") " pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.092567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrhqq\" (UniqueName: \"kubernetes.io/projected/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-kube-api-access-qrhqq\") pod \"marketplace-operator-79b997595-m7qhz\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.116565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf2km\" (UniqueName: \"kubernetes.io/projected/4f705cc6-53c8-4781-b33b-d0e5a386a22d-kube-api-access-hf2km\") pod \"multus-admission-controller-857f4d67dd-clfjm\" (UID: \"4f705cc6-53c8-4781-b33b-d0e5a386a22d\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.129610 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.130143 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:13.630123367 +0000 UTC m=+150.315586472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.151649 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-926mt\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-kube-api-access-926mt\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.178709 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq"] Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.183849 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.187083 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.189780 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct44j\" (UniqueName: \"kubernetes.io/projected/b61dff80-b5ca-454b-ae88-f45d20097560-kube-api-access-ct44j\") pod \"machine-config-operator-74547568cd-gtqp8\" (UID: \"b61dff80-b5ca-454b-ae88-f45d20097560\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.222877 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.224348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdrw\" (UniqueName: \"kubernetes.io/projected/6f5fc5f2-5f01-40fa-85ad-1f98835115dc-kube-api-access-rxdrw\") pod \"cluster-image-registry-operator-dc59b4c8b-vnwd2\" (UID: \"6f5fc5f2-5f01-40fa-85ad-1f98835115dc\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.229573 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g25f\" (UniqueName: \"kubernetes.io/projected/b15ca477-8221-4895-bd12-dbd5cfd1bfa9-kube-api-access-5g25f\") pod \"ingress-canary-c5cgw\" (UID: \"b15ca477-8221-4895-bd12-dbd5cfd1bfa9\") " pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.241505 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-c5cgw" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.242355 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.242795 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:13.742782159 +0000 UTC m=+150.428245264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.259459 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6nc\" (UniqueName: \"kubernetes.io/projected/21e659aa-cc22-45fa-ac47-73de9aab039d-kube-api-access-ms6nc\") pod \"machine-config-server-v7ff8\" (UID: \"21e659aa-cc22-45fa-ac47-73de9aab039d\") " pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.280760 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-bound-sa-token\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.281179 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn5vs\" (UniqueName: \"kubernetes.io/projected/99e64d4c-8fe7-4eec-ad1d-c10d740fccbb-kube-api-access-mn5vs\") pod \"package-server-manager-789f6589d5-qs9mc\" (UID: \"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.290085 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.298997 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.305554 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rcth\" (UniqueName: \"kubernetes.io/projected/001d6d03-e3da-4ee8-ae26-68e1775403fc-kube-api-access-2rcth\") pod \"cluster-samples-operator-665b6dd947-z44wh\" (UID: \"001d6d03-e3da-4ee8-ae26-68e1775403fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.326127 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.343998 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.344678 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:13.844660436 +0000 UTC m=+150.530123541 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.344970 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh27r\" (UniqueName: \"kubernetes.io/projected/562f489c-010a-4bcf-9db6-524717e4c0eb-kube-api-access-zh27r\") pod \"machine-config-controller-84d6567774-w6x5c\" (UID: \"562f489c-010a-4bcf-9db6-524717e4c0eb\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.347276 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vx8q\" (UniqueName: 
\"kubernetes.io/projected/7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942-kube-api-access-2vx8q\") pod \"service-ca-9c57cc56f-km977\" (UID: \"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942\") " pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.360629 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.372777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmklh\" (UniqueName: \"kubernetes.io/projected/a9bfdebe-6e6f-4a2c-baee-e339a0b4048d-kube-api-access-tmklh\") pod \"olm-operator-6b444d44fb-ngqjw\" (UID: \"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.376835 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.379567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmszb\" (UniqueName: \"kubernetes.io/projected/010c3f80-32bc-4a56-b1e9-7503e757192f-kube-api-access-bmszb\") pod \"control-plane-machine-set-operator-78cbb6b69f-fhzvg\" (UID: \"010c3f80-32bc-4a56-b1e9-7503e757192f\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.388617 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.414171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrhw\" (UniqueName: \"kubernetes.io/projected/b8fcd1f9-ed8a-4659-889b-0ac463f9962d-kube-api-access-mnrhw\") pod \"service-ca-operator-777779d784-g9nns\" (UID: \"b8fcd1f9-ed8a-4659-889b-0ac463f9962d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.429820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj4r7\" (UniqueName: \"kubernetes.io/projected/e753db28-0960-4c2a-bd93-00e8cd25ad61-kube-api-access-xj4r7\") pod \"packageserver-d55dfcdfc-5zb4r\" (UID: \"e753db28-0960-4c2a-bd93-00e8cd25ad61\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.431841 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4"] Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.440353 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6vd9x"] Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.440546 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxj99\" (UniqueName: \"kubernetes.io/projected/189c2b61-53a3-4182-b251-2b8e6feddbcf-kube-api-access-jxj99\") pod \"csi-hostpathplugin-hkfx2\" (UID: \"189c2b61-53a3-4182-b251-2b8e6feddbcf\") " pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.446943 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.447560 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.447859 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:13.947847558 +0000 UTC m=+150.633310653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.459851 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-km977" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.467188 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cdl85"] Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.468196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8jss\" (UniqueName: \"kubernetes.io/projected/b6eaabec-1376-4e26-898a-70d39fad7903-kube-api-access-x8jss\") pod \"catalog-operator-68c6474976-n5rd7\" (UID: \"b6eaabec-1376-4e26-898a-70d39fad7903\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.475085 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.480403 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-v7ff8" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.491433 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5b7zm"] Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.508182 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.522480 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.548628 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.548818 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.04878808 +0000 UTC m=+150.734251185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.548996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.549351 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.049340364 +0000 UTC m=+150.734803549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.613482 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.637710 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.642198 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.651099 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.651220 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.151200131 +0000 UTC m=+150.836663236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.654601 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.654971 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.154960411 +0000 UTC m=+150.840423516 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.710026 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.765880 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.766235 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.266215985 +0000 UTC m=+150.951679080 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: W0126 14:49:13.772414 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2f8927c_1301_492d_ae9a_487ec70b3038.slice/crio-8dabe89121432398c448fcd83cad66fedbdd4049f61afde59af445d4663c7326 WatchSource:0}: Error finding container 8dabe89121432398c448fcd83cad66fedbdd4049f61afde59af445d4663c7326: Status 404 returned error can't find the container with id 8dabe89121432398c448fcd83cad66fedbdd4049f61afde59af445d4663c7326 Jan 26 14:49:13 crc kubenswrapper[4823]: W0126 14:49:13.834124 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11ef8e8e_11b6_4db0_a9df_a4d9c2b4567a.slice/crio-ae6f35412e1cae7105e771f219aefdf26c2af872c1ecab95b125213112494022 WatchSource:0}: Error finding container ae6f35412e1cae7105e771f219aefdf26c2af872c1ecab95b125213112494022: Status 404 returned error can't find the container with id ae6f35412e1cae7105e771f219aefdf26c2af872c1ecab95b125213112494022 Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.867012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc 
kubenswrapper[4823]: E0126 14:49:13.867326 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.367314422 +0000 UTC m=+151.052777527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.968288 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.968686 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.468631124 +0000 UTC m=+151.154094229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:13 crc kubenswrapper[4823]: I0126 14:49:13.969666 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:13 crc kubenswrapper[4823]: E0126 14:49:13.970099 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.470082372 +0000 UTC m=+151.155545467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.070641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.071206 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.571186579 +0000 UTC m=+151.256649684 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.125733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-p7srw" event={"ID":"18f7273c-10d0-4c81-878f-d2ac07b0fb63","Type":"ContainerStarted","Data":"225b9a0e1beea34260a4a6ce4b35909d6854fe81167ae54787137aefa9c9a351"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.192532 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.195316 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.695295353 +0000 UTC m=+151.380758658 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.203719 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" event={"ID":"00f5968d-4a95-44bc-9633-0bb7844b3bfb","Type":"ContainerStarted","Data":"7d5c177d2b0ed9f61ba8290a2f75e80fe149cf2fb7bfc8fcb7cbe75f27d08183"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.222579 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-sd92l" podStartSLOduration=127.222557861 podStartE2EDuration="2m7.222557861s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:14.220782395 +0000 UTC m=+150.906245510" watchObservedRunningTime="2026-01-26 14:49:14.222557861 +0000 UTC m=+150.908020966" Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.252846 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4m2g6"] Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.296804 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" event={"ID":"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36","Type":"ContainerStarted","Data":"78fb46fbf03283485e7b02a8e9d19a866c2e411ecd0de5a7931f2cfb7e626104"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.296852 4823 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" event={"ID":"d88c9c1d-3f83-4a0a-b996-a012f7a0dd36","Type":"ContainerStarted","Data":"8912daa5a8bb885c88d921c1412dd24038efdd0b0ef2af8086c292adb1fe30ea"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.297876 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.298286 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.798268198 +0000 UTC m=+151.483731303 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.331831 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb"] Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.336725 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v7ff8" event={"ID":"21e659aa-cc22-45fa-ac47-73de9aab039d","Type":"ContainerStarted","Data":"9f2f3dde46e95ce556a1f26d941e0e98914b1d482eb8ce8c43fe823626d5d889"} Jan 26 14:49:14 crc kubenswrapper[4823]: W0126 14:49:14.345191 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod204a2df3_b8d7_4998_8ff1_3c3a6112c666.slice/crio-f847209d8ce0e268e5705c1ff5a5bea0ebaad3f1807e7cb8415d73031eec1544 WatchSource:0}: Error finding container f847209d8ce0e268e5705c1ff5a5bea0ebaad3f1807e7cb8415d73031eec1544: Status 404 returned error can't find the container with id f847209d8ce0e268e5705c1ff5a5bea0ebaad3f1807e7cb8415d73031eec1544 Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.380121 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" event={"ID":"7eea18e5-bc89-4c10-a843-c8b374a239a2","Type":"ContainerStarted","Data":"93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.382473 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.383218 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" event={"ID":"3e8743ea-e343-4d36-8d1f-645b59c9a7fd","Type":"ContainerStarted","Data":"e2984ca61b85bf87c3d3c81673b4b0e22c1429a629974e1db45e858e3f678026"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.397565 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.397615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" event={"ID":"6a33769f-089d-482c-bb7c-5569c4a078a7","Type":"ContainerStarted","Data":"f400b9a7d89c4958b09782dbe52df87f13f44bf1b46d76c3974d56019cacc287"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.404790 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.407311 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:14.906025511 +0000 UTC m=+151.591488616 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.441989 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl"] Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.446448 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" event={"ID":"ab42db39-920c-4bd5-b524-d3c649e24f67","Type":"ContainerStarted","Data":"832c86fcc1b3619d8a2def86d6ed37fe97abc300eb137e2a7ef19dcfa8ef2bc4"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.463741 4823 generic.go:334] "Generic (PLEG): container finished" podID="b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31" containerID="cc406434277c39a590b2eaf7a0d7699a04d0005f4a6366c16c87e12da40537ca" exitCode=0 Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.463857 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" event={"ID":"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31","Type":"ContainerDied","Data":"cc406434277c39a590b2eaf7a0d7699a04d0005f4a6366c16c87e12da40537ca"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.506228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 
14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.507824 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.007804345 +0000 UTC m=+151.693267450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.512820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5b7zm" event={"ID":"4609bcb4-b5ef-43fa-85be-2d897f635951","Type":"ContainerStarted","Data":"b32b3c39a97a7cc57a74ba640d48b03f8dd5d668524e154c2f74a052534b718a"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.525174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" event={"ID":"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a","Type":"ContainerStarted","Data":"ae6f35412e1cae7105e771f219aefdf26c2af872c1ecab95b125213112494022"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.526851 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" event={"ID":"b942d06c-fac8-4546-98a6-f36d0666d0d4","Type":"ContainerStarted","Data":"d75f0bd4f906ec394f8e39190e12f875fdd529ad56b5c2dd3fdc132161542814"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.535497 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" event={"ID":"c2f8927c-1301-492d-ae9a-487ec70b3038","Type":"ContainerStarted","Data":"8dabe89121432398c448fcd83cad66fedbdd4049f61afde59af445d4663c7326"} Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.609739 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.611100 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.111080039 +0000 UTC m=+151.796543194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.635880 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-sd92l" Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.679735 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.711831 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.713141 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.2131 +0000 UTC m=+151.898563105 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:14 crc kubenswrapper[4823]: I0126 14:49:14.913705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:14 crc kubenswrapper[4823]: E0126 14:49:14.914079 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.41406638 +0000 UTC m=+152.099529485 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.027326 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.027569 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.527528843 +0000 UTC m=+152.212991948 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.027922 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.028305 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.528285893 +0000 UTC m=+152.213748998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.166454 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.666438467 +0000 UTC m=+152.351901572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.166353 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.166759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.167062 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.667053134 +0000 UTC m=+152.352516249 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.195132 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:15 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:15 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:15 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.195183 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.278528 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.296686 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.796653632 +0000 UTC m=+152.482116747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.297003 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.297809 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.797774771 +0000 UTC m=+152.483237876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.400497 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.400959 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:15.900940072 +0000 UTC m=+152.586403187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.433132 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld"] Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.435594 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bbxp2"] Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.459274 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m"] Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.487720 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-p7srw" podStartSLOduration=128.487700281 podStartE2EDuration="2m8.487700281s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:15.469711697 +0000 UTC m=+152.155174812" watchObservedRunningTime="2026-01-26 14:49:15.487700281 +0000 UTC m=+152.173163386" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.504293 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.504599 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.004588416 +0000 UTC m=+152.690051521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.552308 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e8743ea-e343-4d36-8d1f-645b59c9a7fd" containerID="d1f2cc4ab733bdcb49426fcbf952ebfa1076f5c119fd251c5dcfde5e494800ae" exitCode=0 Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.552400 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" event={"ID":"3e8743ea-e343-4d36-8d1f-645b59c9a7fd","Type":"ContainerDied","Data":"d1f2cc4ab733bdcb49426fcbf952ebfa1076f5c119fd251c5dcfde5e494800ae"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.582846 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" event={"ID":"ab42db39-920c-4bd5-b524-d3c649e24f67","Type":"ContainerStarted","Data":"108d0d5996556fd95d7a42dbf6182b367bf95769b58ca22b8bd46280abab7d4b"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.586101 4823 generic.go:334] "Generic (PLEG): container finished" podID="11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a" 
containerID="010b50fccd9d984f53ec8a03f21c720532921126e172e967e7b2cbf5206dab5b" exitCode=0 Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.586193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" event={"ID":"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a","Type":"ContainerDied","Data":"010b50fccd9d984f53ec8a03f21c720532921126e172e967e7b2cbf5206dab5b"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.606062 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.606542 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.106522004 +0000 UTC m=+152.791985109 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.613556 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-v7ff8" event={"ID":"21e659aa-cc22-45fa-ac47-73de9aab039d","Type":"ContainerStarted","Data":"9b5c06da3594eb3ab4c32421a440c3cffe4e40538a8e9cfcb29fa7f290453e6d"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.633735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" event={"ID":"6a33769f-089d-482c-bb7c-5569c4a078a7","Type":"ContainerStarted","Data":"a51d3bb187b85cd0edca1c25f621a5419b925c05330963741c93b316ee8befeb"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.651059 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" event={"ID":"00f5968d-4a95-44bc-9633-0bb7844b3bfb","Type":"ContainerStarted","Data":"1cdcb5146a8ea054e2ac98e0a5abdcec0071a1365457ecdb11c2c4d7557b6ec0"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.653094 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbxp2" event={"ID":"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788","Type":"ContainerStarted","Data":"3fe2611bb818290d71bdd55ec48c6857d52f238a4d29e8bb57e7b923da372c35"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.667615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5b7zm" 
event={"ID":"4609bcb4-b5ef-43fa-85be-2d897f635951","Type":"ContainerStarted","Data":"ba43dd26a73790abb8822133aa68c3d720b150e646e5fb6df3567b35f24c5465"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.668643 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.670470 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.670506 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.674668 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-v7zhj" podStartSLOduration=128.674656762 podStartE2EDuration="2m8.674656762s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:15.558636951 +0000 UTC m=+152.244100056" watchObservedRunningTime="2026-01-26 14:49:15.674656762 +0000 UTC m=+152.360119867" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.676121 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" podStartSLOduration=128.67611598 podStartE2EDuration="2m8.67611598s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:15.670134852 +0000 UTC m=+152.355597957" watchObservedRunningTime="2026-01-26 14:49:15.67611598 +0000 UTC m=+152.361579085" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.686314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" event={"ID":"c2f8927c-1301-492d-ae9a-487ec70b3038","Type":"ContainerStarted","Data":"4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.687035 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.688273 4823 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tdvm4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.688310 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" podUID="c2f8927c-1301-492d-ae9a-487ec70b3038" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.690333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" event={"ID":"204a2df3-b8d7-4998-8ff1-3c3a6112c666","Type":"ContainerStarted","Data":"f847209d8ce0e268e5705c1ff5a5bea0ebaad3f1807e7cb8415d73031eec1544"} Jan 26 14:49:15 crc 
kubenswrapper[4823]: I0126 14:49:15.694285 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" event={"ID":"90fe06d2-db90-4559-855a-18be3ede4ad5","Type":"ContainerStarted","Data":"6884e621b5e6fc49242c77f3f3fad96a9556445da667522f03e35ad2cc245521"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.701402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" event={"ID":"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae","Type":"ContainerStarted","Data":"ae4437e6e75673d7c49d66d8dc3956587b103a0cddc5ee5b796eaae8fd6fe11a"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.710034 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" event={"ID":"b942d06c-fac8-4546-98a6-f36d0666d0d4","Type":"ContainerStarted","Data":"43d6a40d7311c3cc7e46a1d3d836c442fa978a42ca3f41b62273b10b2d816005"} Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.710073 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.710978 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.713867 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:16.213854396 +0000 UTC m=+152.899317501 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.751555 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5b7zm" podStartSLOduration=128.751538579 podStartE2EDuration="2m8.751538579s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:15.750770409 +0000 UTC m=+152.436233514" watchObservedRunningTime="2026-01-26 14:49:15.751538579 +0000 UTC m=+152.437001684" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.757107 4823 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6vd9x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.757175 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.759827 4823 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h"] Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.814219 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.815857 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.315841905 +0000 UTC m=+153.001305010 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 14:49:15.858920 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-rww8p" podStartSLOduration=128.858889752 podStartE2EDuration="2m8.858889752s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:15.853348275 +0000 UTC m=+152.538811380" watchObservedRunningTime="2026-01-26 14:49:15.858889752 +0000 UTC m=+152.544352857" Jan 26 14:49:15 crc kubenswrapper[4823]: I0126 
14:49:15.918840 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:15 crc kubenswrapper[4823]: E0126 14:49:15.919603 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.419585152 +0000 UTC m=+153.105048257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.043703 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.044375 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:16.544329762 +0000 UTC m=+153.229792927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.107687 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-v7ff8" podStartSLOduration=6.107673553 podStartE2EDuration="6.107673553s" podCreationTimestamp="2026-01-26 14:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:16.074311253 +0000 UTC m=+152.759774358" watchObservedRunningTime="2026-01-26 14:49:16.107673553 +0000 UTC m=+152.793136658" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.125770 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-z4f2q" podStartSLOduration=129.12575515 podStartE2EDuration="2m9.12575515s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:16.106894452 +0000 UTC m=+152.792357547" watchObservedRunningTime="2026-01-26 14:49:16.12575515 +0000 UTC m=+152.811218255" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.128125 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4kchq" podStartSLOduration=129.128111432 
podStartE2EDuration="2m9.128111432s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:16.127732831 +0000 UTC m=+152.813195936" watchObservedRunningTime="2026-01-26 14:49:16.128111432 +0000 UTC m=+152.813574537" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.145395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.146047 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.646033345 +0000 UTC m=+153.331496450 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.155179 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" podStartSLOduration=129.155153405 podStartE2EDuration="2m9.155153405s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:16.149418744 +0000 UTC m=+152.834881849" watchObservedRunningTime="2026-01-26 14:49:16.155153405 +0000 UTC m=+152.840616510" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.174267 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" podStartSLOduration=128.174251179 podStartE2EDuration="2m8.174251179s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:16.173616972 +0000 UTC m=+152.859080077" watchObservedRunningTime="2026-01-26 14:49:16.174251179 +0000 UTC m=+152.859714274" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.223937 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" podStartSLOduration=129.223915129 podStartE2EDuration="2m9.223915129s" podCreationTimestamp="2026-01-26 
14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:16.218774993 +0000 UTC m=+152.904238168" watchObservedRunningTime="2026-01-26 14:49:16.223915129 +0000 UTC m=+152.909378234" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.247223 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.247695 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.747675525 +0000 UTC m=+153.433138630 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.350216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.350644 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.850626731 +0000 UTC m=+153.536089906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.452185 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.452617 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:16.95259681 +0000 UTC m=+153.638059915 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.554395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.554836 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.054816297 +0000 UTC m=+153.740279402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.664242 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.664655 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.164641613 +0000 UTC m=+153.850104718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.726503 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.726779 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.787479 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.787851 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:17.287837373 +0000 UTC m=+153.973300478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:16 crc kubenswrapper[4823]: I0126 14:49:16.949973 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:16 crc kubenswrapper[4823]: E0126 14:49:16.950440 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.450422531 +0000 UTC m=+154.135885636 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.053485 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.053862 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.553831258 +0000 UTC m=+154.239294363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.054726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" event={"ID":"11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a","Type":"ContainerStarted","Data":"b899342f121a2ee75ad6fee19afb3a27976eb5eef33126498c41ac2190c4b8e0"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.055385 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.059925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-l9mhl" event={"ID":"90fe06d2-db90-4559-855a-18be3ede4ad5","Type":"ContainerStarted","Data":"cfe2eec55b870b0c53eefc94c1347819a28090d73f45d67e5a559b99acc7b60a"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.064066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" event={"ID":"23c73e7e-c5e9-4b0d-9a3a-169d3e3689ae","Type":"ContainerStarted","Data":"fb2d052dfab8c028080aa194bd7b5b63b282778024d1c6df8b42684a442666b9"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.077761 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" 
event={"ID":"0f066caa-2e70-4ef1-ae84-da5b204e0d25","Type":"ContainerStarted","Data":"832e7c3d26081052e981433f9371616814b871ce2e68e99805236cd1c94ac642"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.097066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" event={"ID":"204a2df3-b8d7-4998-8ff1-3c3a6112c666","Type":"ContainerStarted","Data":"ddb942db71f0ace8baa9859ca13b761d991ebb3318c96e2d844109f5a474fc9d"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.097119 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" event={"ID":"204a2df3-b8d7-4998-8ff1-3c3a6112c666","Type":"ContainerStarted","Data":"86b977b7a90540af2c2a3287be80f2ac97c5139f9909908c5aa98b3380e1ce30"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.098374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbxp2" event={"ID":"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788","Type":"ContainerStarted","Data":"d4bec3dfc2bfadf6f6187733c8ac5aa0c40933408a656efa55da499be54861d2"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.100383 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" event={"ID":"63b17ba3-023d-46ef-9b4e-1936166074bc","Type":"ContainerStarted","Data":"5af20a6944db24544a52db6d9939fa6364b415c2f327af4bdbb64010f7b3e958"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.103233 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" event={"ID":"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31","Type":"ContainerStarted","Data":"ad0be987527382f93bbb5cc3fa94cd7033fab687d9ca36c7e324fe77d0bd9ef4"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.105203 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" event={"ID":"1fce63dc-472e-4a08-b2c0-0228c9f41cc4","Type":"ContainerStarted","Data":"8ac50cc9e0e4795f7abd13aa5c3f4e5d2a6bc1d0c52f818d7f556d769b0b516a"} Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.111931 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.111989 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.114425 4823 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6vd9x container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" start-of-body= Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.114477 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.21:6443/healthz\": dial tcp 10.217.0.21:6443: connect: connection refused" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.154884 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.156145 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.656123477 +0000 UTC m=+154.341586582 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.259347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.259731 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.759717039 +0000 UTC m=+154.445180144 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.287156 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:17 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:17 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:17 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.287214 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.337160 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" podStartSLOduration=130.337139631 podStartE2EDuration="2m10.337139631s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:17.163223944 +0000 UTC m=+153.848687049" watchObservedRunningTime="2026-01-26 14:49:17.337139631 +0000 UTC m=+154.022602736" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.363850 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.364405 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.86438897 +0000 UTC m=+154.549852075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.406186 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4m2g6" podStartSLOduration=130.406163972 podStartE2EDuration="2m10.406163972s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:17.344720911 +0000 UTC m=+154.030184016" watchObservedRunningTime="2026-01-26 14:49:17.406163972 +0000 UTC m=+154.091627087" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.456660 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kp4gb" 
podStartSLOduration=130.456640403 podStartE2EDuration="2m10.456640403s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:17.401844777 +0000 UTC m=+154.087307892" watchObservedRunningTime="2026-01-26 14:49:17.456640403 +0000 UTC m=+154.142103508" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.457097 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-c5cgw"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.467090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.467567 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:17.967553051 +0000 UTC m=+154.653016156 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.472249 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5vlmk"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.579775 4823 csr.go:261] certificate signing request csr-t4mxw is approved, waiting to be issued Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.584181 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bbxp2" podStartSLOduration=130.584168446 podStartE2EDuration="2m10.584168446s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:17.579994777 +0000 UTC m=+154.265457882" watchObservedRunningTime="2026-01-26 14:49:17.584168446 +0000 UTC m=+154.269631551" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.584848 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.591441 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 
14:49:17.591679 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.091664374 +0000 UTC m=+154.777127469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.630713 4823 csr.go:257] certificate signing request csr-t4mxw is issued Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.646500 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.730858 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.731342 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.231330078 +0000 UTC m=+154.916793183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.751403 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.753937 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7qhz"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.756113 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.772297 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:17 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:17 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:17 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.772349 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.863719 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.864019 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.364005887 +0000 UTC m=+155.049468982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.953167 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.953217 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-g9nns"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.959243 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhskc"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.959288 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.959299 4823 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hkfx2"] Jan 26 14:49:17 crc kubenswrapper[4823]: I0126 14:49:17.990585 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:17 crc kubenswrapper[4823]: E0126 14:49:17.990921 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.490910575 +0000 UTC m=+155.176373680 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.119977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.120203 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.620188465 +0000 UTC m=+155.305651570 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.142346 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2"] Jan 26 14:49:18 crc kubenswrapper[4823]: W0126 14:49:18.144112 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod029bd494_0ffa_4390_995e_bb26fdbbfbe7.slice/crio-665075c47d718e02ec62a7db3fc74d9006a6bc807b22035a930d3ca4e52bc090 WatchSource:0}: Error finding container 665075c47d718e02ec62a7db3fc74d9006a6bc807b22035a930d3ca4e52bc090: Status 404 returned error can't find the container with id 665075c47d718e02ec62a7db3fc74d9006a6bc807b22035a930d3ca4e52bc090 Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.166308 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-clfjm"] Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.258555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:18 crc kubenswrapper[4823]: 
E0126 14:49:18.258882 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.758869392 +0000 UTC m=+155.444332497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.288210 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg"] Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.288266 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh"] Jan 26 14:49:18 crc kubenswrapper[4823]: W0126 14:49:18.290955 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5fc5f2_5f01_40fa_85ad_1f98835115dc.slice/crio-9b76831117d7f672cad6f10a3a2cc890e6fb437cce76688631fd35896d33821c WatchSource:0}: Error finding container 9b76831117d7f672cad6f10a3a2cc890e6fb437cce76688631fd35896d33821c: Status 404 returned error can't find the container with id 9b76831117d7f672cad6f10a3a2cc890e6fb437cce76688631fd35896d33821c Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.312781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" 
event={"ID":"b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31","Type":"ContainerStarted","Data":"498e7e437db4670dd08b1d17081a3686d9b8eab4eaa2f082a1d07776a70289d2"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.316052 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" event={"ID":"c5857fd5-1c26-4ffd-a779-df738b7ad0b9","Type":"ContainerStarted","Data":"de545b49ab59a95dc2b1e16cb6f0a68b6ad7555dae59bc1be27f6af7dc25c68a"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.382849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" event={"ID":"e3184da5-d52f-4dda-a92f-2832a6f4dd3e","Type":"ContainerStarted","Data":"c5d56864cf3c48abea41f6d64eca9ecd7fcdaedec4497a75349b677651a4830c"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.384077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" event={"ID":"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d","Type":"ContainerStarted","Data":"c10d23b0e9b9859a96f7a879656f2b913b34ddbe7b08cdc5a80fc38ae37a7fe4"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.428277 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.429156 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:18.929135503 +0000 UTC m=+155.614598608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.475339 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c"] Jan 26 14:49:18 crc kubenswrapper[4823]: W0126 14:49:18.487676 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod010c3f80_32bc_4a56_b1e9_7503e757192f.slice/crio-3eca9692419f6e650e359b6f55fd6df145d0f1ef716e29ca4ef8cc3710b0cbe2 WatchSource:0}: Error finding container 3eca9692419f6e650e359b6f55fd6df145d0f1ef716e29ca4ef8cc3710b0cbe2: Status 404 returned error can't find the container with id 3eca9692419f6e650e359b6f55fd6df145d0f1ef716e29ca4ef8cc3710b0cbe2 Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.530263 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.536000 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:19.035982691 +0000 UTC m=+155.721445796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.558546 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" event={"ID":"b8fcd1f9-ed8a-4659-889b-0ac463f9962d","Type":"ContainerStarted","Data":"a9843470716c449ce89e186f6653ed8dd2491d7f941a8a723e4715f2d7b6ee6e"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.608785 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c5cgw" event={"ID":"b15ca477-8221-4895-bd12-dbd5cfd1bfa9","Type":"ContainerStarted","Data":"2597b394e86f792b2a4bfd27ec48528dff0b0ce24831f0faf478a7de47559f5b"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.631910 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-26 14:44:17 +0000 UTC, rotation deadline is 2026-11-20 11:08:09.780790001 +0000 UTC Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.631937 4823 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7148h18m51.148854849s for next certificate rotation Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.644172 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" event={"ID":"3e8743ea-e343-4d36-8d1f-645b59c9a7fd","Type":"ContainerStarted","Data":"43f9be1684a38dbd0864a17b8c686c02644c043ac1de97f43876a86f578f6203"} Jan 
26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.652310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.652532 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.152511345 +0000 UTC m=+155.837974450 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.652788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.654249 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:19.15422642 +0000 UTC m=+155.839689525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.688171 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:18 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:18 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:18 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.688226 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.688440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" event={"ID":"e753db28-0960-4c2a-bd93-00e8cd25ad61","Type":"ContainerStarted","Data":"ea106f4497bd81c29ad17d0d73d88cd73aaefeb9b78c15035de7986db1377e3a"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.706760 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" 
event={"ID":"1fce63dc-472e-4a08-b2c0-0228c9f41cc4","Type":"ContainerStarted","Data":"cfe53186b415887d0078aade16ab1f87c9a8a6da1108a1c1295c84533aee8c47"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.747181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" event={"ID":"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb","Type":"ContainerStarted","Data":"5d4b6170da57e3ac6774a5217cd1f38604209818a5639af55284a3e0aac807c3"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.750532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" event={"ID":"63b17ba3-023d-46ef-9b4e-1936166074bc","Type":"ContainerStarted","Data":"9eaa8087659141f0d376af65e66c7999de9c17bfda144fa3ec1cf3097d9ad90e"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.752754 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" event={"ID":"0f066caa-2e70-4ef1-ae84-da5b204e0d25","Type":"ContainerStarted","Data":"94c56bd3a705e54cb4a7e0d82b2252b9a8d6d02b611c97905173ab2da0c9b136"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.760025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.760394 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:19.260352049 +0000 UTC m=+155.945815154 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.760562 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.761226 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.261215642 +0000 UTC m=+155.946678837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.768010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" event={"ID":"b83ec26f-28e8-400b-94f2-e8526e3c0cb3","Type":"ContainerStarted","Data":"cf6162dd2d4f6de69da7452a71ed284cec79f7b37cd99265731fb9a728e414ec"} Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.769306 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.769353 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.862142 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.863866 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.363848069 +0000 UTC m=+156.049311184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.875012 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-km977"] Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.912112 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" podStartSLOduration=131.912086291 podStartE2EDuration="2m11.912086291s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:18.88585977 +0000 UTC m=+155.571322865" watchObservedRunningTime="2026-01-26 14:49:18.912086291 +0000 UTC m=+155.597549396" Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.912517 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7"] Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.929792 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd"] Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.958960 4823 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8"] Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.965664 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:18 crc kubenswrapper[4823]: E0126 14:49:18.965984 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.465972833 +0000 UTC m=+156.151435938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:18 crc kubenswrapper[4823]: I0126 14:49:18.978277 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" podStartSLOduration=130.978259497 podStartE2EDuration="2m10.978259497s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:18.976639983 +0000 UTC m=+155.662103088" watchObservedRunningTime="2026-01-26 14:49:18.978259497 +0000 UTC m=+155.663722602" Jan 26 14:49:19 crc 
kubenswrapper[4823]: I0126 14:49:19.066629 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.067265 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.567246894 +0000 UTC m=+156.252710009 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.172768 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.173071 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:19.673059475 +0000 UTC m=+156.358522580 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.189202 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-llt2m" podStartSLOduration=132.18918769 podStartE2EDuration="2m12.18918769s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:19.187849335 +0000 UTC m=+155.873312450" watchObservedRunningTime="2026-01-26 14:49:19.18918769 +0000 UTC m=+155.874650795" Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.260668 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-c5cgw" podStartSLOduration=9.260647655 podStartE2EDuration="9.260647655s" podCreationTimestamp="2026-01-26 14:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:19.259791923 +0000 UTC m=+155.945255038" watchObservedRunningTime="2026-01-26 14:49:19.260647655 +0000 UTC m=+155.946110760" Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.274849 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.275594 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.775577959 +0000 UTC m=+156.461041064 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.377343 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.377778 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.877764164 +0000 UTC m=+156.563227269 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.478025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.478284 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:19.978257655 +0000 UTC m=+156.663720770 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.580297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.580746 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.080732527 +0000 UTC m=+156.766195632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.681619 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.681792 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.181746691 +0000 UTC m=+156.867210056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.681844 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.682339 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.182329327 +0000 UTC m=+156.867792432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.683015 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:19 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:19 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:19 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.683047 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.774243 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" event={"ID":"4f705cc6-53c8-4781-b33b-d0e5a386a22d","Type":"ContainerStarted","Data":"ebaf92d71b51e439a9acfe319513e9045c5b81dc7930897b89dbe148eb3fc44a"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.783222 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.783417 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.283386792 +0000 UTC m=+156.968849897 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.783518 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.783872 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.283861175 +0000 UTC m=+156.969324280 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.799049 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-c5cgw" event={"ID":"b15ca477-8221-4895-bd12-dbd5cfd1bfa9","Type":"ContainerStarted","Data":"df7d974c0d91443d0ccbbebd8f73a9b463e1e65e14a73d06287e4c97b019e21a"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.819220 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" event={"ID":"189c2b61-53a3-4182-b251-2b8e6feddbcf","Type":"ContainerStarted","Data":"cafd9030a52224c547bb1e168cbd69b0a9e9fc081eb520fc96b3527dbf143d3f"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.834871 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-km977" event={"ID":"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942","Type":"ContainerStarted","Data":"1d40f1bd1a6974706765ac3cf88eeef6dc833a0e2fbcbc84407a38a547cc3649"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.835845 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" event={"ID":"b6eaabec-1376-4e26-898a-70d39fad7903","Type":"ContainerStarted","Data":"06108573e0419a81cdfc9d62157930e4371994f6d108b4e3a2702c6e4fe14d11"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.836906 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" 
event={"ID":"1c67988c-1152-41a0-8f2d-2d3a5eb12c46","Type":"ContainerStarted","Data":"8d9f5a5d2dea66e98bdbb18ec0c7f4c0619a6b9a041187cb89aeb36ae237e447"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.836944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" event={"ID":"1c67988c-1152-41a0-8f2d-2d3a5eb12c46","Type":"ContainerStarted","Data":"b612243016e5d7f486479b0f1f6e338dfc93e558907413d5a1365d338d3ea187"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.838174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhskc" event={"ID":"029bd494-0ffa-4390-995e-bb26fdbbfbe7","Type":"ContainerStarted","Data":"665075c47d718e02ec62a7db3fc74d9006a6bc807b22035a930d3ca4e52bc090"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.840153 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" event={"ID":"c5857fd5-1c26-4ffd-a779-df738b7ad0b9","Type":"ContainerStarted","Data":"34287e78ddeb83e21db0ad980ee3622441ceae2430b3d8cf8cd393f9c462cc65"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.841479 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" event={"ID":"e3184da5-d52f-4dda-a92f-2832a6f4dd3e","Type":"ContainerStarted","Data":"54214eb792a5c7c75afa82c5061117a233725139622bd0a45a69c1fbd3b6eb40"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.868207 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" event={"ID":"6f5fc5f2-5f01-40fa-85ad-1f98835115dc","Type":"ContainerStarted","Data":"9b76831117d7f672cad6f10a3a2cc890e6fb437cce76688631fd35896d33821c"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.886024 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:19 crc kubenswrapper[4823]: E0126 14:49:19.888670 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.387887479 +0000 UTC m=+157.073350644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.888706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" event={"ID":"562f489c-010a-4bcf-9db6-524717e4c0eb","Type":"ContainerStarted","Data":"c5b1a2b2df4ca5a9ee37ec9bdead4f28c3ca0247d92e3c2bc7190019314d227f"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.891783 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" event={"ID":"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317","Type":"ContainerStarted","Data":"48700a16021273f261771e50e221d66b3d6058aaea097b488b01a4ce2629f52a"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.925343 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" event={"ID":"b83ec26f-28e8-400b-94f2-e8526e3c0cb3","Type":"ContainerStarted","Data":"17a25a841dc22e00936511c31f6e04261b2d3adeab9f54164c9736697042d13b"} Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.953132 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5vlmk" podStartSLOduration=132.953110419 podStartE2EDuration="2m12.953110419s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:19.953060488 +0000 UTC m=+156.638523613" watchObservedRunningTime="2026-01-26 14:49:19.953110419 +0000 UTC m=+156.638573524" Jan 26 14:49:19 crc kubenswrapper[4823]: I0126 14:49:19.953261 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zq4h" podStartSLOduration=132.953256113 podStartE2EDuration="2m12.953256113s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:19.391106295 +0000 UTC m=+156.076569410" watchObservedRunningTime="2026-01-26 14:49:19.953256113 +0000 UTC m=+156.638719218" Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:19.993408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" event={"ID":"001d6d03-e3da-4ee8-ae26-68e1775403fc","Type":"ContainerStarted","Data":"fd5cef2b9535e14573f83b6198ed5a4c9742dba0c5ff6c70096c6dd428c50ff1"} Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.008378 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.008820 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.508806548 +0000 UTC m=+157.194269653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.021667 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" event={"ID":"010c3f80-32bc-4a56-b1e9-7503e757192f","Type":"ContainerStarted","Data":"3eca9692419f6e650e359b6f55fd6df145d0f1ef716e29ca4ef8cc3710b0cbe2"} Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.061462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" event={"ID":"b61dff80-b5ca-454b-ae88-f45d20097560","Type":"ContainerStarted","Data":"3c7ba1e153be2b39ca3e78650691346c73ee04bb65156a67f989925792901e79"} Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.075573 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-cdl85 container/openshift-config-operator namespace/openshift-config-operator: Readiness 
probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.075631 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" podUID="11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.117592 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.119549 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.619525338 +0000 UTC m=+157.304988443 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.220070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.222131 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.722116254 +0000 UTC m=+157.407579359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.322874 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.323486 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.823470147 +0000 UTC m=+157.508933262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.424318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.424843 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:20.92482917 +0000 UTC m=+157.610292275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.529723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.530119 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.030084967 +0000 UTC m=+157.715548092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.631923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.632237 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.132225441 +0000 UTC m=+157.817688546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.689382 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:20 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:20 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:20 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.689445 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.736972 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.737517 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:21.237502048 +0000 UTC m=+157.922965153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.844765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.845444 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.345427815 +0000 UTC m=+158.030890920 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:20 crc kubenswrapper[4823]: I0126 14:49:20.946045 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:20 crc kubenswrapper[4823]: E0126 14:49:20.946396 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.446381407 +0000 UTC m=+158.131844512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.047275 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.047691 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.547676559 +0000 UTC m=+158.233139674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.117536 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" event={"ID":"9d12dc0b-ae5f-40a1-b3b0-59dfbec22317","Type":"ContainerStarted","Data":"dc0eb9551f61c336e7294f34128c4f8a5dce5e45362a9380a92ae87e2b1c5d05"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.138795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" event={"ID":"001d6d03-e3da-4ee8-ae26-68e1775403fc","Type":"ContainerStarted","Data":"952e534d86e0052c3b310eaac2d4f00f31049e3ff06eda0eb1c3874626e03145"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.138864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" event={"ID":"001d6d03-e3da-4ee8-ae26-68e1775403fc","Type":"ContainerStarted","Data":"bfc993115858d2fd7de28cbdeab150e007284d61970522eaca941442f0c37146"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.141319 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-d7wzd" podStartSLOduration=134.141306258 podStartE2EDuration="2m14.141306258s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.139933902 +0000 UTC m=+157.825397007" watchObservedRunningTime="2026-01-26 14:49:21.141306258 +0000 UTC m=+157.826769363" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.154965 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.155137 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.655098942 +0000 UTC m=+158.340562047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.155178 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.155549 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.655539003 +0000 UTC m=+158.341002108 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.172407 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" event={"ID":"010c3f80-32bc-4a56-b1e9-7503e757192f","Type":"ContainerStarted","Data":"1b2a646b14f1a88fc337c877374aca9841ec677d9907666ec400ecb7c2aae0b2"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.177885 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" event={"ID":"a9bfdebe-6e6f-4a2c-baee-e339a0b4048d","Type":"ContainerStarted","Data":"41a7131060122b14d00cab9723e9c67fc9831ac83045ef671f6f90584bc6d9f6"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.178956 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.180511 4823 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-ngqjw container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Jan 26 14:49:21 crc kubenswrapper[4823]: 
I0126 14:49:21.180961 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" podUID="a9bfdebe-6e6f-4a2c-baee-e339a0b4048d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.205083 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" event={"ID":"b8fcd1f9-ed8a-4659-889b-0ac463f9962d","Type":"ContainerStarted","Data":"25eaa7013f9663caac56c823dcc8039b4179f130a8f04d766a1c03666036aa3d"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.240079 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" event={"ID":"562f489c-010a-4bcf-9db6-524717e4c0eb","Type":"ContainerStarted","Data":"a5087132baac2ad06a93ef8a20cf89e22630e39ced68be199d7d007135f658b0"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.240140 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" event={"ID":"562f489c-010a-4bcf-9db6-524717e4c0eb","Type":"ContainerStarted","Data":"0ace4296e7eef365845082d0bcdc0dc9a1c978f23c40056618d89b055203a829"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.261007 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.262141 4823 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.762124255 +0000 UTC m=+158.447587350 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.277516 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" event={"ID":"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb","Type":"ContainerStarted","Data":"3f1d42f8be8d4c8227b14b232b98bc4c97046c1ae71fa09cb9b59c38169b3df4"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.277571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" event={"ID":"99e64d4c-8fe7-4eec-ad1d-c10d740fccbb","Type":"ContainerStarted","Data":"668c27a5ae81f14b4b7dd01fa912885a824d36733bf5f2b4efab45c5038ccbdd"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.278208 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.279259 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-z44wh" podStartSLOduration=134.279249976 podStartE2EDuration="2m14.279249976s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.204383232 +0000 UTC m=+157.889846337" watchObservedRunningTime="2026-01-26 14:49:21.279249976 +0000 UTC m=+157.964713081" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.300637 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhskc" event={"ID":"029bd494-0ffa-4390-995e-bb26fdbbfbe7","Type":"ContainerStarted","Data":"29a18e13ad9ff457b0ebe9bcda02a2cb3278c12caf4ec6c90c0cea2160e3f879"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.301418 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.337564 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" event={"ID":"b61dff80-b5ca-454b-ae88-f45d20097560","Type":"ContainerStarted","Data":"48570ca646208d2f758b6d18311b71dd42f296907928b7db6bf34ae5f8add9d0"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.355814 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" event={"ID":"6f5fc5f2-5f01-40fa-85ad-1f98835115dc","Type":"ContainerStarted","Data":"ce8cbf316f052c387d6028851f6ab0449ba7d3dcf7fe61d63aab06fa6de37e45"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.358220 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" event={"ID":"1fce63dc-472e-4a08-b2c0-0228c9f41cc4","Type":"ContainerStarted","Data":"3f74c6d825291a315ea8faedb7608fdcbdd0a0518b383cc3dc5466326d2bb90a"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.362329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.362924 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-fhzvg" podStartSLOduration=134.362908833 podStartE2EDuration="2m14.362908833s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.361304561 +0000 UTC m=+158.046767666" watchObservedRunningTime="2026-01-26 14:49:21.362908833 +0000 UTC m=+158.048371938" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.363113 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" podStartSLOduration=133.363108668 podStartE2EDuration="2m13.363108668s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.296042649 +0000 UTC m=+157.981505774" watchObservedRunningTime="2026-01-26 14:49:21.363108668 +0000 UTC m=+158.048571773" Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.364290 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.86427916 +0000 UTC m=+158.549742265 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.367654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-km977" event={"ID":"7ac0ad9e-0a2a-4980-8d54-e9d0dfea3942","Type":"ContainerStarted","Data":"5cf2ed68dec02bf48396a072e3f652da5d07fcb569d389aa93f29d16c40baaca"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.376847 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" event={"ID":"4f705cc6-53c8-4781-b33b-d0e5a386a22d","Type":"ContainerStarted","Data":"fd051c715da8e6cf5743bb5b529ddfc4472b94c4081bd44932abd1b75ae4eecd"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.384002 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" event={"ID":"b6eaabec-1376-4e26-898a-70d39fad7903","Type":"ContainerStarted","Data":"bff29065b3fd2ce34e321b8ccf348509a4b72544cca489fcac895003b805d6c0"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.384048 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.385540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" 
event={"ID":"c5857fd5-1c26-4ffd-a779-df738b7ad0b9","Type":"ContainerStarted","Data":"3a56ce369200b7b40eb03fb325eb58c8b445cee4102eba694d7b83ea79d41dad"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.386189 4823 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n5rd7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.386226 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" podUID="b6eaabec-1376-4e26-898a-70d39fad7903" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.387351 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" event={"ID":"e753db28-0960-4c2a-bd93-00e8cd25ad61","Type":"ContainerStarted","Data":"292841c284ad601039cb30fe622115298338588b85309aed3cab70d6db38e0d3"} Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.388614 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.388881 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.406638 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m7qhz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection 
refused" start-of-body= Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.406706 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.406824 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5zb4r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.406841 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" podUID="e753db28-0960-4c2a-bd93-00e8cd25ad61" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.470444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.472127 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:21.972109413 +0000 UTC m=+158.657572518 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.578597 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.578942 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.078930371 +0000 UTC m=+158.764393476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.678315 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-w6x5c" podStartSLOduration=133.678293752 podStartE2EDuration="2m13.678293752s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.629818393 +0000 UTC m=+158.315281508" watchObservedRunningTime="2026-01-26 14:49:21.678293752 +0000 UTC m=+158.363756857" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.685021 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.685726 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.185692106 +0000 UTC m=+158.871155231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.693686 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:21 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:21 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:21 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.693751 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.791161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.791552 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:22.291539188 +0000 UTC m=+158.977002293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.803677 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" podStartSLOduration=133.803658919 podStartE2EDuration="2m13.803658919s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.679237747 +0000 UTC m=+158.364700852" watchObservedRunningTime="2026-01-26 14:49:21.803658919 +0000 UTC m=+158.489122024" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.803826 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-g9nns" podStartSLOduration=133.803822683 podStartE2EDuration="2m13.803822683s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.801542103 +0000 UTC m=+158.487005208" watchObservedRunningTime="2026-01-26 14:49:21.803822683 +0000 UTC m=+158.489285788" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.879644 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zhskc" podStartSLOduration=11.879626382 
podStartE2EDuration="11.879626382s" podCreationTimestamp="2026-01-26 14:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.878008329 +0000 UTC m=+158.563471434" watchObservedRunningTime="2026-01-26 14:49:21.879626382 +0000 UTC m=+158.565089487" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.893807 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.894195 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.394180176 +0000 UTC m=+159.079643281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.909149 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-vnwd2" podStartSLOduration=134.90913462 podStartE2EDuration="2m14.90913462s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.906750167 +0000 UTC m=+158.592213272" watchObservedRunningTime="2026-01-26 14:49:21.90913462 +0000 UTC m=+158.594597725" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.982230 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-dsmld" podStartSLOduration=134.982206377 podStartE2EDuration="2m14.982206377s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.952915415 +0000 UTC m=+158.638378520" watchObservedRunningTime="2026-01-26 14:49:21.982206377 +0000 UTC m=+158.667669482" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.983054 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" podStartSLOduration=133.98304607 podStartE2EDuration="2m13.98304607s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:21.979971309 +0000 UTC m=+158.665434424" watchObservedRunningTime="2026-01-26 14:49:21.98304607 +0000 UTC m=+158.668509175" Jan 26 14:49:21 crc kubenswrapper[4823]: I0126 14:49:21.996322 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:21 crc kubenswrapper[4823]: E0126 14:49:21.996821 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.496807893 +0000 UTC m=+159.182270998 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.082225 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-km977" podStartSLOduration=134.082207805 podStartE2EDuration="2m14.082207805s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.074215654 +0000 UTC m=+158.759678759" watchObservedRunningTime="2026-01-26 14:49:22.082207805 +0000 UTC m=+158.767670910" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.082356 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" podStartSLOduration=135.082351249 podStartE2EDuration="2m15.082351249s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.033716846 +0000 UTC m=+158.719179951" watchObservedRunningTime="2026-01-26 14:49:22.082351249 +0000 UTC m=+158.767814354" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.098482 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.098663 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.598642949 +0000 UTC m=+159.284106054 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.098802 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.099138 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.599124821 +0000 UTC m=+159.284587926 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.148054 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" podStartSLOduration=134.148032982 podStartE2EDuration="2m14.148032982s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.119490818 +0000 UTC m=+158.804953953" watchObservedRunningTime="2026-01-26 14:49:22.148032982 +0000 UTC m=+158.833496087" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.150690 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-nnkhw" podStartSLOduration=135.150675371 podStartE2EDuration="2m15.150675371s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.148769811 +0000 UTC m=+158.834232926" watchObservedRunningTime="2026-01-26 14:49:22.150675371 +0000 UTC m=+158.836138476" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.189473 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podStartSLOduration=134.189452363 podStartE2EDuration="2m14.189452363s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.188862988 +0000 UTC m=+158.874326103" watchObservedRunningTime="2026-01-26 14:49:22.189452363 +0000 UTC m=+158.874915468" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.204869 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.205336 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.705318492 +0000 UTC m=+159.390781597 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.270626 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" podStartSLOduration=134.270608804 podStartE2EDuration="2m14.270608804s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.228841492 +0000 UTC m=+158.914304617" watchObservedRunningTime="2026-01-26 14:49:22.270608804 +0000 UTC m=+158.956071909" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.272751 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" podStartSLOduration=134.272741431 podStartE2EDuration="2m14.272741431s" podCreationTimestamp="2026-01-26 14:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:22.270378988 +0000 UTC m=+158.955842093" watchObservedRunningTime="2026-01-26 14:49:22.272741431 +0000 UTC m=+158.958204536" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.296631 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.296759 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.304978 4823 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-cdl85 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 14:49:22 crc kubenswrapper[4823]: [+]log ok Jan 26 14:49:22 crc kubenswrapper[4823]: [+]poststarthook/max-in-flight-filter ok Jan 26 14:49:22 crc kubenswrapper[4823]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Jan 26 14:49:22 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.305042 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" podUID="11ef8e8e-11b6-4db0-a9df-a4d9c2b4567a" containerName="openshift-config-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.306590 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.306811 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cdl85" Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.307003 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:22.806986444 +0000 UTC m=+159.492449549 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.346860 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.348212 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.380916 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.410789 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.412350 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:22.912329462 +0000 UTC m=+159.597792567 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.446929 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-gtqp8" event={"ID":"b61dff80-b5ca-454b-ae88-f45d20097560","Type":"ContainerStarted","Data":"9ad0ea8569a54bea33a6d0ba0a7663f729e72050d6d2e7d479ce1f334e91c7aa"} Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.469767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-clfjm" event={"ID":"4f705cc6-53c8-4781-b33b-d0e5a386a22d","Type":"ContainerStarted","Data":"2c0f21049342489f5f1445a14e283e4bf99448722cad7c7b7237d84ab02f1040"} Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.473512 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" event={"ID":"189c2b61-53a3-4182-b251-2b8e6feddbcf","Type":"ContainerStarted","Data":"7d08ffd52dcd76096bb677f26180675b83ae64cf7ff77334c018ad280be79c53"} Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.485926 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.496286 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhskc" event={"ID":"029bd494-0ffa-4390-995e-bb26fdbbfbe7","Type":"ContainerStarted","Data":"1b0485d6de304d6a92338b93dbe0bcadbe6ce4e8231c59830eab0c0704dd9ea6"} Jan 26 14:49:22 crc 
kubenswrapper[4823]: I0126 14:49:22.497923 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m7qhz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.497965 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.497936 4823 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-5zb4r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" start-of-body= Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.498288 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" podUID="e753db28-0960-4c2a-bd93-00e8cd25ad61" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.18:5443/healthz\": dial tcp 10.217.0.18:5443: connect: connection refused" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.498261 4823 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n5rd7 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.498334 4823 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" podUID="b6eaabec-1376-4e26-898a-70d39fad7903" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.503676 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.512003 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.514919 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.014901277 +0000 UTC m=+159.700364482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.525092 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-ngqjw" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.525140 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.525180 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mtvt4" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.525276 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.534278 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.534598 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.546885 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.546931 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.546998 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.547012 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.596279 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.596458 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.616677 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.617048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1986f966-a6b6-4cc7-9916-c116f5e10e39-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.617923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1986f966-a6b6-4cc7-9916-c116f5e10e39-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.621192 4823 patch_prober.go:28] interesting pod/console-f9d7485db-bbxp2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.621237 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bbxp2" 
podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.723433 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.223407678 +0000 UTC m=+159.908870783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.766684 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.852594 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1986f966-a6b6-4cc7-9916-c116f5e10e39-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.852713 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1986f966-a6b6-4cc7-9916-c116f5e10e39-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.852760 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.854634 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:22 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:22 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:22 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.854680 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.855156 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.355144222 +0000 UTC m=+160.040607327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.855591 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1986f966-a6b6-4cc7-9916-c116f5e10e39-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.888385 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1986f966-a6b6-4cc7-9916-c116f5e10e39-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.956044 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.956739 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:23.456714311 +0000 UTC m=+160.142177426 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:22 crc kubenswrapper[4823]: I0126 14:49:22.956890 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:22 crc kubenswrapper[4823]: E0126 14:49:22.957287 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.457277415 +0000 UTC m=+160.142740580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.057513 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.057917 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.557902439 +0000 UTC m=+160.243365544 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.159204 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.159609 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.659597592 +0000 UTC m=+160.345060697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.182895 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.262485 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.262764 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.762748682 +0000 UTC m=+160.448211787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.362162 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m7qhz container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.362217 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.362480 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m7qhz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.362500 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.363510 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.363957 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.863940272 +0000 UTC m=+160.549403437 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.464845 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.465695 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:23.965678665 +0000 UTC m=+160.651141760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.567240 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.567645 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.067629394 +0000 UTC m=+160.753092509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.580574 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-m7qhz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.580620 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.583153 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" event={"ID":"189c2b61-53a3-4182-b251-2b8e6feddbcf","Type":"ContainerStarted","Data":"e11fb53f0387bb7218673cef4a30dc8ae2e949e0e968fbbff86d077fe0b05ef6"} Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.674514 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.675353 4823 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.175335915 +0000 UTC m=+160.860799020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.675650 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.676513 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5rd7" Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.678954 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.17894592 +0000 UTC m=+160.864409025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.687561 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:23 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:23 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:23 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.687622 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.711912 4823 patch_prober.go:28] interesting pod/apiserver-76f77b778f-d8kxw container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]log ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]etcd ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/generic-apiserver-start-informers ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/max-in-flight-filter ok Jan 26 14:49:23 crc kubenswrapper[4823]: 
[+]poststarthook/storage-object-count-tracker-hook ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 26 14:49:23 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 26 14:49:23 crc kubenswrapper[4823]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectcache ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-startinformers ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 26 14:49:23 crc kubenswrapper[4823]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 26 14:49:23 crc kubenswrapper[4823]: livez check failed Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.711970 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" podUID="b9b40dbc-c6c0-482d-9b0b-5b274a6d2e31" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.776854 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.778023 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-26 14:49:24.278007723 +0000 UTC m=+160.963470828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.879076 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.379063278 +0000 UTC m=+161.064526383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.879224 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:23 crc kubenswrapper[4823]: I0126 14:49:23.981858 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:23 crc kubenswrapper[4823]: E0126 14:49:23.982239 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.482225159 +0000 UTC m=+161.167688264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.084579 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.085992 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.585975346 +0000 UTC m=+161.271438451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.186844 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.187303 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.687284798 +0000 UTC m=+161.372747913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.292839 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.293454 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.793440318 +0000 UTC m=+161.478903423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.333721 4823 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.393816 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.394008 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.89398268 +0000 UTC m=+161.579445785 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.394115 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.394489 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.894478103 +0000 UTC m=+161.579941208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.404820 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 26 14:49:24 crc kubenswrapper[4823]: W0126 14:49:24.444913 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod1986f966_a6b6_4cc7_9916_c116f5e10e39.slice/crio-96ac8b0b61b0529e6d79788fba9ddea49ad94a5d328e72dd268eb8da410d43de WatchSource:0}: Error finding container 96ac8b0b61b0529e6d79788fba9ddea49ad94a5d328e72dd268eb8da410d43de: Status 404 returned error can't find the container with id 96ac8b0b61b0529e6d79788fba9ddea49ad94a5d328e72dd268eb8da410d43de Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.495776 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.495949 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.995922099 +0000 UTC m=+161.681385214 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.496019 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.496383 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:24.99635843 +0000 UTC m=+161.681821585 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.530802 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-5zb4r" Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.596657 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.596775 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:25.096751649 +0000 UTC m=+161.782214754 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.596949 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.597277 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:25.097269432 +0000 UTC m=+161.782732537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.629981 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1986f966-a6b6-4cc7-9916-c116f5e10e39","Type":"ContainerStarted","Data":"96ac8b0b61b0529e6d79788fba9ddea49ad94a5d328e72dd268eb8da410d43de"} Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.661203 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" event={"ID":"189c2b61-53a3-4182-b251-2b8e6feddbcf","Type":"ContainerStarted","Data":"99a14dd4cc4f2d42f8509eba35d8f176323b9da756faf74779d1c3e1dbfcca32"} Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.661244 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" event={"ID":"189c2b61-53a3-4182-b251-2b8e6feddbcf","Type":"ContainerStarted","Data":"17a7efad7eabdd7b2a34f8fbc02eee4633d93f52c2ee3d1818886711b9046d7e"} Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.680967 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:24 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:24 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:24 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 
14:49:24.681028 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.698773 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.700179 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:25.200162036 +0000 UTC m=+161.885625141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.801042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:24 crc kubenswrapper[4823]: E0126 14:49:24.801516 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-26 14:49:25.301496869 +0000 UTC m=+161.986960034 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-pbvlk" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:24 crc kubenswrapper[4823]: I0126 14:49:24.953201 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:25 crc kubenswrapper[4823]: E0126 14:49:25.010722 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-26 14:49:25.453537889 +0000 UTC m=+162.139000994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.015729 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hkfx2" podStartSLOduration=15.015711318 podStartE2EDuration="15.015711318s" podCreationTimestamp="2026-01-26 14:49:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:24.689044343 +0000 UTC m=+161.374507448" watchObservedRunningTime="2026-01-26 14:49:25.015711318 +0000 UTC m=+161.701174423" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.018902 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6jswn"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.022164 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.029480 4823 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-26T14:49:24.333748931Z","Handler":null,"Name":""} Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.029968 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.033556 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jswn"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.034528 4823 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.034726 4823 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.091762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.094378 4823 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.094413 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.123440 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-plnn5"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.151452 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.172670 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-pbvlk\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.192612 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-utilities\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.192697 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-catalog-content\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.192784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55txg\" (UniqueName: \"kubernetes.io/projected/0a7642fa-63ff-41bb-950e-b0d1badff9fe-kube-api-access-55txg\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.226307 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.237702 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-plnn5"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.301914 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.302463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwkq8\" (UniqueName: \"kubernetes.io/projected/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-kube-api-access-rwkq8\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.302497 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-catalog-content\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.302529 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-utilities\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.302548 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-catalog-content\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.302585 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55txg\" (UniqueName: \"kubernetes.io/projected/0a7642fa-63ff-41bb-950e-b0d1badff9fe-kube-api-access-55txg\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.302621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-utilities\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.303139 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-utilities\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.303522 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-catalog-content\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.444271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwkq8\" (UniqueName: \"kubernetes.io/projected/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-kube-api-access-rwkq8\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.444375 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-utilities\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.444396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-catalog-content\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.445000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-utilities\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.445390 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-catalog-content\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.446162 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xcg28"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.447338 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.448113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55txg\" (UniqueName: \"kubernetes.io/projected/0a7642fa-63ff-41bb-950e-b0d1badff9fe-kube-api-access-55txg\") pod \"community-operators-6jswn\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.451034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xcg28"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.451737 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.469298 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwkq8\" (UniqueName: \"kubernetes.io/projected/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-kube-api-access-rwkq8\") pod \"certified-operators-plnn5\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.502275 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.527272 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wx77c"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.528790 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.534357 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wx77c"] Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.611158 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.620000 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.747389 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:25 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:25 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:25 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.747600 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.748058 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.748835 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-catalog-content\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.748898 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxllb\" (UniqueName: \"kubernetes.io/projected/68042e2f-4e4f-4953-bcdf-f5fa08e199de-kube-api-access-qxllb\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " 
pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.748932 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzclv\" (UniqueName: \"kubernetes.io/projected/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-kube-api-access-xzclv\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.749040 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-utilities\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.749074 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-utilities\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.749115 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-catalog-content\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.849793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-utilities\") pod \"community-operators-xcg28\" (UID: 
\"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.849854 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-utilities\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.849881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-catalog-content\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.849904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-catalog-content\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.849925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxllb\" (UniqueName: \"kubernetes.io/projected/68042e2f-4e4f-4953-bcdf-f5fa08e199de-kube-api-access-qxllb\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.849953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzclv\" (UniqueName: \"kubernetes.io/projected/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-kube-api-access-xzclv\") pod \"community-operators-xcg28\" (UID: 
\"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.850294 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-utilities\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.850515 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-catalog-content\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.850565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-catalog-content\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:25 crc kubenswrapper[4823]: I0126 14:49:25.850747 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-utilities\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:26 crc kubenswrapper[4823]: I0126 14:49:26.021286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxllb\" (UniqueName: \"kubernetes.io/projected/68042e2f-4e4f-4953-bcdf-f5fa08e199de-kube-api-access-qxllb\") pod \"certified-operators-wx77c\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " 
pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:26 crc kubenswrapper[4823]: I0126 14:49:26.111802 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzclv\" (UniqueName: \"kubernetes.io/projected/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-kube-api-access-xzclv\") pod \"community-operators-xcg28\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:26 crc kubenswrapper[4823]: I0126 14:49:26.146314 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:49:26 crc kubenswrapper[4823]: I0126 14:49:26.392542 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:49:26 crc kubenswrapper[4823]: I0126 14:49:26.757523 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:26 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:26 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:26 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:26 crc kubenswrapper[4823]: I0126 14:49:26.757896 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.022207 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"1986f966-a6b6-4cc7-9916-c116f5e10e39","Type":"ContainerStarted","Data":"80fe346104ce00482408b5eea5960757892133ae534f587cf2ec1fb644012951"} Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.190615 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m282g"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.202122 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.206982 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.209848 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.209828609 podStartE2EDuration="5.209828609s" podCreationTimestamp="2026-01-26 14:49:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:27.207397186 +0000 UTC m=+163.892860291" watchObservedRunningTime="2026-01-26 14:49:27.209828609 +0000 UTC m=+163.895291714" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.235486 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m282g"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.317057 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.334531 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d8kxw" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.338093 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-8k2h6"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.339597 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.349642 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k2h6"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.380258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-catalog-content\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.380377 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-utilities\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.380440 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhms\" (UniqueName: \"kubernetes.io/projected/3efb7df4-2e94-4c83-a793-0fc25d69140e-kube-api-access-7xhms\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.392921 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbvlk"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.434575 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-plnn5"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.482243 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-catalog-content\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.482506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-utilities\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.482606 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-utilities\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.482713 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-catalog-content\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.482842 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwkh\" (UniqueName: \"kubernetes.io/projected/9893d37e-1139-4f8c-974f-d82d38bb4014-kube-api-access-2mwkh\") pod \"redhat-marketplace-8k2h6\" (UID: 
\"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.482913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhms\" (UniqueName: \"kubernetes.io/projected/3efb7df4-2e94-4c83-a793-0fc25d69140e-kube-api-access-7xhms\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.485013 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-catalog-content\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.485470 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-utilities\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.604148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-utilities\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.604273 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-catalog-content\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " 
pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.604318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mwkh\" (UniqueName: \"kubernetes.io/projected/9893d37e-1139-4f8c-974f-d82d38bb4014-kube-api-access-2mwkh\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.604884 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-utilities\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.605279 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-catalog-content\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.639916 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhms\" (UniqueName: \"kubernetes.io/projected/3efb7df4-2e94-4c83-a793-0fc25d69140e-kube-api-access-7xhms\") pod \"redhat-marketplace-m282g\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.645233 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wx77c"] Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.683575 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:27 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:27 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:27 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.684256 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:27 crc kubenswrapper[4823]: W0126 14:49:27.709843 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68042e2f_4e4f_4953_bcdf_f5fa08e199de.slice/crio-d2bcafccd532de2da0561e93e4cef039823bb85ce04c3e124b8bbf85727c964c WatchSource:0}: Error finding container d2bcafccd532de2da0561e93e4cef039823bb85ce04c3e124b8bbf85727c964c: Status 404 returned error can't find the container with id d2bcafccd532de2da0561e93e4cef039823bb85ce04c3e124b8bbf85727c964c Jan 26 14:49:27 crc kubenswrapper[4823]: I0126 14:49:27.715308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mwkh\" (UniqueName: \"kubernetes.io/projected/9893d37e-1139-4f8c-974f-d82d38bb4014-kube-api-access-2mwkh\") pod \"redhat-marketplace-8k2h6\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:27.952462 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:27.968112 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.193318 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s6bkg"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.202543 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.205865 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xcg28"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.209850 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.227712 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6bkg"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.228237 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6jswn"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.454922 4823 generic.go:334] "Generic (PLEG): container finished" podID="1c67988c-1152-41a0-8f2d-2d3a5eb12c46" containerID="8d9f5a5d2dea66e98bdbb18ec0c7f4c0619a6b9a041187cb89aeb36ae237e447" exitCode=0 Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.455044 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" event={"ID":"1c67988c-1152-41a0-8f2d-2d3a5eb12c46","Type":"ContainerDied","Data":"8d9f5a5d2dea66e98bdbb18ec0c7f4c0619a6b9a041187cb89aeb36ae237e447"} Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.469482 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.470219 4823 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.479405 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.479602 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.498939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-catalog-content\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.499022 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctl9\" (UniqueName: \"kubernetes.io/projected/6cc17803-10bb-4c3c-b89f-4ecb574c2092-kube-api-access-9ctl9\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.499047 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-utilities\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.499069 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b1386ff-8611-4c67-952f-d7fd8c7df053-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.499093 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b1386ff-8611-4c67-952f-d7fd8c7df053-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.499536 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerStarted","Data":"eaa893f18464459a71f3ce41ef8576ee5bc690675c2f1634aa121fc9d3bbc4f0"} Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.503822 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.537838 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" event={"ID":"aff73130-88e7-4a8b-9b78-9af559e12a71","Type":"ContainerStarted","Data":"936b81c3ce49f8c7885f66cc56b14a19d49250ed689497a605634550655a5fe4"} Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.572287 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerStarted","Data":"d2bcafccd532de2da0561e93e4cef039823bb85ce04c3e124b8bbf85727c964c"} Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.600283 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-psj4l"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.602611 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9ctl9\" (UniqueName: \"kubernetes.io/projected/6cc17803-10bb-4c3c-b89f-4ecb574c2092-kube-api-access-9ctl9\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.602685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-utilities\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.602714 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b1386ff-8611-4c67-952f-d7fd8c7df053-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.602741 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b1386ff-8611-4c67-952f-d7fd8c7df053-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.602886 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-catalog-content\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.603743 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4b1386ff-8611-4c67-952f-d7fd8c7df053-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.607495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-utilities\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.608735 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-catalog-content\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.611570 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.635245 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctl9\" (UniqueName: \"kubernetes.io/projected/6cc17803-10bb-4c3c-b89f-4ecb574c2092-kube-api-access-9ctl9\") pod \"redhat-operators-s6bkg\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.645008 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b1386ff-8611-4c67-952f-d7fd8c7df053-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.664415 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psj4l"] Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.703498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nssqw\" (UniqueName: \"kubernetes.io/projected/66da9ec1-7863-4edb-8204-e0ea1812c556-kube-api-access-nssqw\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.703584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-utilities\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.703606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-catalog-content\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.734400 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:28 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:28 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:28 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.734460 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.808752 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-utilities\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.808791 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-catalog-content\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.808848 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-nssqw\" (UniqueName: \"kubernetes.io/projected/66da9ec1-7863-4edb-8204-e0ea1812c556-kube-api-access-nssqw\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.810157 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-utilities\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.810202 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-catalog-content\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.828128 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.841359 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nssqw\" (UniqueName: \"kubernetes.io/projected/66da9ec1-7863-4edb-8204-e0ea1812c556-kube-api-access-nssqw\") pod \"redhat-operators-psj4l\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:28 crc kubenswrapper[4823]: I0126 14:49:28.896176 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.003975 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m282g"] Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.047758 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k2h6"] Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.128340 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.577811 4823 generic.go:334] "Generic (PLEG): container finished" podID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerID="f5603c70a203c298561e72ee2ac41cba5e3db21c9114bbbe6ecd58e51ac45d2c" exitCode=0 Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.577850 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerDied","Data":"f5603c70a203c298561e72ee2ac41cba5e3db21c9114bbbe6ecd58e51ac45d2c"} Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.579039 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerStarted","Data":"f5c74124cf0821fa289b30944644b84e45763ad53fe42814b564bd5d1ed13cd4"} Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.580551 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" event={"ID":"aff73130-88e7-4a8b-9b78-9af559e12a71","Type":"ContainerStarted","Data":"960f7c08c0cf3de7396cba8b5ffb2dabf0cb595111009eb4aaa7836f3bf0ee8b"} Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.580704 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.581710 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerStarted","Data":"e3a6ab1b27a17a1aa2f26fb0a1a027b0af83c107cdd22b3cc78b95dc78bec766"} Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.582502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerStarted","Data":"f16e97089fedfdbdf79575c6317a4c28ee1d33ab466828c211ff215631781a97"} Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.617744 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" podStartSLOduration=142.6177256 podStartE2EDuration="2m22.6177256s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:29.616684733 +0000 UTC m=+166.302147838" watchObservedRunningTime="2026-01-26 14:49:29.6177256 +0000 UTC m=+166.303188705" Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.682124 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:29 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:29 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:29 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.682518 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" 
podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.930473 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:49:29 crc kubenswrapper[4823]: I0126 14:49:29.934491 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/35318be8-9029-4606-8a04-feec32098d9c-metrics-certs\") pod \"network-metrics-daemon-dh4f9\" (UID: \"35318be8-9029-4606-8a04-feec32098d9c\") " pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:49:30 crc kubenswrapper[4823]: I0126 14:49:30.105994 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-dh4f9" Jan 26 14:49:30 crc kubenswrapper[4823]: I0126 14:49:30.680325 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:30 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:30 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:30 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:30 crc kubenswrapper[4823]: I0126 14:49:30.680476 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.227338 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zhskc" Jan 26 14:49:31 crc kubenswrapper[4823]: W0126 14:49:31.506960 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3efb7df4_2e94_4c83_a793_0fc25d69140e.slice/crio-f4261a1c5bdd851ea3bd103f58bd5a9eec2c5b137e66aec6db2023259688d2dc WatchSource:0}: Error finding container f4261a1c5bdd851ea3bd103f58bd5a9eec2c5b137e66aec6db2023259688d2dc: Status 404 returned error can't find the container with id f4261a1c5bdd851ea3bd103f58bd5a9eec2c5b137e66aec6db2023259688d2dc Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.508241 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:49:31 crc kubenswrapper[4823]: W0126 14:49:31.511883 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9893d37e_1139_4f8c_974f_d82d38bb4014.slice/crio-80d737b5f9dab8e339445db0338915f9702dec7010f13f6db8c13983e879df97 WatchSource:0}: Error finding container 80d737b5f9dab8e339445db0338915f9702dec7010f13f6db8c13983e879df97: Status 404 returned error can't find the container with id 80d737b5f9dab8e339445db0338915f9702dec7010f13f6db8c13983e879df97 Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.682419 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:31 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:31 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:31 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.682801 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.729824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerStarted","Data":"80d737b5f9dab8e339445db0338915f9702dec7010f13f6db8c13983e879df97"} Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.743904 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.749285 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" event={"ID":"1c67988c-1152-41a0-8f2d-2d3a5eb12c46","Type":"ContainerDied","Data":"b612243016e5d7f486479b0f1f6e338dfc93e558907413d5a1365d338d3ea187"} Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.749329 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b612243016e5d7f486479b0f1f6e338dfc93e558907413d5a1365d338d3ea187" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.851753 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerStarted","Data":"f4261a1c5bdd851ea3bd103f58bd5a9eec2c5b137e66aec6db2023259688d2dc"} Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.853890 4823 generic.go:334] "Generic (PLEG): container finished" podID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerID="e3a6ab1b27a17a1aa2f26fb0a1a027b0af83c107cdd22b3cc78b95dc78bec766" exitCode=0 Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.853936 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerDied","Data":"e3a6ab1b27a17a1aa2f26fb0a1a027b0af83c107cdd22b3cc78b95dc78bec766"} Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.887111 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-config-volume\") pod \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.887187 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-secret-volume\") pod \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.887238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k4c6\" (UniqueName: \"kubernetes.io/projected/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-kube-api-access-2k4c6\") pod \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\" (UID: \"1c67988c-1152-41a0-8f2d-2d3a5eb12c46\") " Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.895969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c67988c-1152-41a0-8f2d-2d3a5eb12c46" (UID: "1c67988c-1152-41a0-8f2d-2d3a5eb12c46"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.912005 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c67988c-1152-41a0-8f2d-2d3a5eb12c46" (UID: "1c67988c-1152-41a0-8f2d-2d3a5eb12c46"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.919649 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-kube-api-access-2k4c6" (OuterVolumeSpecName: "kube-api-access-2k4c6") pod "1c67988c-1152-41a0-8f2d-2d3a5eb12c46" (UID: "1c67988c-1152-41a0-8f2d-2d3a5eb12c46"). InnerVolumeSpecName "kube-api-access-2k4c6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.990460 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k4c6\" (UniqueName: \"kubernetes.io/projected/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-kube-api-access-2k4c6\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.990492 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:31 crc kubenswrapper[4823]: I0126 14:49:31.990518 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c67988c-1152-41a0-8f2d-2d3a5eb12c46-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.353110 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 26 14:49:32 crc kubenswrapper[4823]: W0126 14:49:32.487373 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4b1386ff_8611_4c67_952f_d7fd8c7df053.slice/crio-b59633eb0f5d39cc3e7cdc49348df4470b9e7b897e88b4b50baa5f581262cfb7 WatchSource:0}: Error finding container b59633eb0f5d39cc3e7cdc49348df4470b9e7b897e88b4b50baa5f581262cfb7: Status 404 returned error can't find the container with id b59633eb0f5d39cc3e7cdc49348df4470b9e7b897e88b4b50baa5f581262cfb7 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.543269 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.543632 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.543297 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.543961 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.594436 4823 patch_prober.go:28] interesting pod/console-f9d7485db-bbxp2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.594489 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bbxp2" podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" containerName="console" probeResult="failure" output="Get \"https://10.217.0.25:8443/health\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.627098 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s6bkg"] Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.632876 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-dh4f9"] Jan 26 14:49:32 crc 
kubenswrapper[4823]: I0126 14:49:32.682145 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:32 crc kubenswrapper[4823]: [-]has-synced failed: reason withheld Jan 26 14:49:32 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:32 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.682206 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.697502 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-psj4l"] Jan 26 14:49:32 crc kubenswrapper[4823]: W0126 14:49:32.729166 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6cc17803_10bb_4c3c_b89f_4ecb574c2092.slice/crio-09322e77600cdacde6080ac23de2579f3715f738bd060980582d87b49a4f6443 WatchSource:0}: Error finding container 09322e77600cdacde6080ac23de2579f3715f738bd060980582d87b49a4f6443: Status 404 returned error can't find the container with id 09322e77600cdacde6080ac23de2579f3715f738bd060980582d87b49a4f6443 Jan 26 14:49:32 crc kubenswrapper[4823]: W0126 14:49:32.733232 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66da9ec1_7863_4edb_8204_e0ea1812c556.slice/crio-fe1c9fc9a70d4499dedac4a2b5bf5f3d579b099e61795e6d81b5b23a124a3161 WatchSource:0}: Error finding container fe1c9fc9a70d4499dedac4a2b5bf5f3d579b099e61795e6d81b5b23a124a3161: Status 404 returned error can't find the container with id 
fe1c9fc9a70d4499dedac4a2b5bf5f3d579b099e61795e6d81b5b23a124a3161 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.868324 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerStarted","Data":"fe1c9fc9a70d4499dedac4a2b5bf5f3d579b099e61795e6d81b5b23a124a3161"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.869907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" event={"ID":"35318be8-9029-4606-8a04-feec32098d9c","Type":"ContainerStarted","Data":"caf9b7c1da12d3d8007b73f0d642cb9b67cec0bdaad0c2c8d68b76f50f6eddae"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.872611 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b1386ff-8611-4c67-952f-d7fd8c7df053","Type":"ContainerStarted","Data":"b59633eb0f5d39cc3e7cdc49348df4470b9e7b897e88b4b50baa5f581262cfb7"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.879711 4823 generic.go:334] "Generic (PLEG): container finished" podID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerID="846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76" exitCode=0 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.879790 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerDied","Data":"846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.881994 4823 generic.go:334] "Generic (PLEG): container finished" podID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerID="a69ded64b41702448bd69868fb0a7e26392896add97153e94e8b57e8d7b43942" exitCode=0 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.882041 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerDied","Data":"a69ded64b41702448bd69868fb0a7e26392896add97153e94e8b57e8d7b43942"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.884716 4823 generic.go:334] "Generic (PLEG): container finished" podID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerID="5ae1462da52b15580af40143b17af3ab876765f41928002f93a6b4211cc947da" exitCode=0 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.884765 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerDied","Data":"5ae1462da52b15580af40143b17af3ab876765f41928002f93a6b4211cc947da"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.890723 4823 generic.go:334] "Generic (PLEG): container finished" podID="1986f966-a6b6-4cc7-9916-c116f5e10e39" containerID="80fe346104ce00482408b5eea5960757892133ae534f587cf2ec1fb644012951" exitCode=0 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.890802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1986f966-a6b6-4cc7-9916-c116f5e10e39","Type":"ContainerDied","Data":"80fe346104ce00482408b5eea5960757892133ae534f587cf2ec1fb644012951"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.899601 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerStarted","Data":"09322e77600cdacde6080ac23de2579f3715f738bd060980582d87b49a4f6443"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.941158 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" 
event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerDied","Data":"fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5"} Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.940909 4823 generic.go:334] "Generic (PLEG): container finished" podID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerID="fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5" exitCode=0 Jan 26 14:49:32 crc kubenswrapper[4823]: I0126 14:49:32.946777 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54" Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.363624 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.706873 4823 patch_prober.go:28] interesting pod/router-default-5444994796-p7srw container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 26 14:49:33 crc kubenswrapper[4823]: [+]has-synced ok Jan 26 14:49:33 crc kubenswrapper[4823]: [+]process-running ok Jan 26 14:49:33 crc kubenswrapper[4823]: healthz check failed Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.706929 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-p7srw" podUID="18f7273c-10d0-4c81-878f-d2ac07b0fb63" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.988142 4823 generic.go:334] "Generic (PLEG): container finished" podID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerID="b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766" exitCode=0 Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.988236 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerDied","Data":"b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766"} Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.991943 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b1386ff-8611-4c67-952f-d7fd8c7df053","Type":"ContainerStarted","Data":"629fc21ab363c08efa827cf287de85cd15230619411cc09785e833c037351f02"} Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.995434 4823 generic.go:334] "Generic (PLEG): container finished" podID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerID="aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2" exitCode=0 Jan 26 14:49:33 crc kubenswrapper[4823]: I0126 14:49:33.995502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerDied","Data":"aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2"} Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.012997 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" event={"ID":"35318be8-9029-4606-8a04-feec32098d9c","Type":"ContainerStarted","Data":"eb5a925d2fdb114e6e09a589ba523dc29ced45e71a63d4baf3ddb299ff5408a3"} Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.508916 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.508983 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.681809 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.695012 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-p7srw" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.706251 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=6.706226303 podStartE2EDuration="6.706226303s" podCreationTimestamp="2026-01-26 14:49:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:34.137700527 +0000 UTC m=+170.823163642" watchObservedRunningTime="2026-01-26 14:49:34.706226303 +0000 UTC m=+171.391689408" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.867713 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.961684 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1986f966-a6b6-4cc7-9916-c116f5e10e39-kubelet-dir\") pod \"1986f966-a6b6-4cc7-9916-c116f5e10e39\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.961924 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1986f966-a6b6-4cc7-9916-c116f5e10e39-kube-api-access\") pod \"1986f966-a6b6-4cc7-9916-c116f5e10e39\" (UID: \"1986f966-a6b6-4cc7-9916-c116f5e10e39\") " Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.961916 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1986f966-a6b6-4cc7-9916-c116f5e10e39-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1986f966-a6b6-4cc7-9916-c116f5e10e39" (UID: "1986f966-a6b6-4cc7-9916-c116f5e10e39"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.962212 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1986f966-a6b6-4cc7-9916-c116f5e10e39-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:34 crc kubenswrapper[4823]: I0126 14:49:34.982754 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1986f966-a6b6-4cc7-9916-c116f5e10e39-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1986f966-a6b6-4cc7-9916-c116f5e10e39" (UID: "1986f966-a6b6-4cc7-9916-c116f5e10e39"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:49:35 crc kubenswrapper[4823]: I0126 14:49:35.065608 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1986f966-a6b6-4cc7-9916-c116f5e10e39-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:35 crc kubenswrapper[4823]: I0126 14:49:35.070964 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-dh4f9" event={"ID":"35318be8-9029-4606-8a04-feec32098d9c","Type":"ContainerStarted","Data":"2aabbb76cb7168d19582f64600601dbe38f02b48af7bdf76546a925adfb35861"} Jan 26 14:49:35 crc kubenswrapper[4823]: I0126 14:49:35.074821 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 26 14:49:35 crc kubenswrapper[4823]: I0126 14:49:35.074923 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"1986f966-a6b6-4cc7-9916-c116f5e10e39","Type":"ContainerDied","Data":"96ac8b0b61b0529e6d79788fba9ddea49ad94a5d328e72dd268eb8da410d43de"} Jan 26 14:49:35 crc kubenswrapper[4823]: I0126 14:49:35.075036 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96ac8b0b61b0529e6d79788fba9ddea49ad94a5d328e72dd268eb8da410d43de" Jan 26 14:49:36 crc kubenswrapper[4823]: I0126 14:49:36.158064 4823 generic.go:334] "Generic (PLEG): container finished" podID="4b1386ff-8611-4c67-952f-d7fd8c7df053" containerID="629fc21ab363c08efa827cf287de85cd15230619411cc09785e833c037351f02" exitCode=0 Jan 26 14:49:36 crc kubenswrapper[4823]: I0126 14:49:36.159087 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b1386ff-8611-4c67-952f-d7fd8c7df053","Type":"ContainerDied","Data":"629fc21ab363c08efa827cf287de85cd15230619411cc09785e833c037351f02"} Jan 26 14:49:36 crc 
kubenswrapper[4823]: I0126 14:49:36.187340 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-dh4f9" podStartSLOduration=149.187317798 podStartE2EDuration="2m29.187317798s" podCreationTimestamp="2026-01-26 14:47:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:49:35.101633572 +0000 UTC m=+171.787096757" watchObservedRunningTime="2026-01-26 14:49:36.187317798 +0000 UTC m=+172.872780903" Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.744943 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.824103 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b1386ff-8611-4c67-952f-d7fd8c7df053-kube-api-access\") pod \"4b1386ff-8611-4c67-952f-d7fd8c7df053\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.824267 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b1386ff-8611-4c67-952f-d7fd8c7df053-kubelet-dir\") pod \"4b1386ff-8611-4c67-952f-d7fd8c7df053\" (UID: \"4b1386ff-8611-4c67-952f-d7fd8c7df053\") " Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.824469 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4b1386ff-8611-4c67-952f-d7fd8c7df053-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4b1386ff-8611-4c67-952f-d7fd8c7df053" (UID: "4b1386ff-8611-4c67-952f-d7fd8c7df053"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.824710 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b1386ff-8611-4c67-952f-d7fd8c7df053-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.832280 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1386ff-8611-4c67-952f-d7fd8c7df053-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4b1386ff-8611-4c67-952f-d7fd8c7df053" (UID: "4b1386ff-8611-4c67-952f-d7fd8c7df053"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:49:37 crc kubenswrapper[4823]: I0126 14:49:37.926678 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4b1386ff-8611-4c67-952f-d7fd8c7df053-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:49:38 crc kubenswrapper[4823]: I0126 14:49:38.177329 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4b1386ff-8611-4c67-952f-d7fd8c7df053","Type":"ContainerDied","Data":"b59633eb0f5d39cc3e7cdc49348df4470b9e7b897e88b4b50baa5f581262cfb7"} Jan 26 14:49:38 crc kubenswrapper[4823]: I0126 14:49:38.177392 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b59633eb0f5d39cc3e7cdc49348df4470b9e7b897e88b4b50baa5f581262cfb7" Jan 26 14:49:38 crc kubenswrapper[4823]: I0126 14:49:38.177450 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.534076 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.535006 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.534076 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.535128 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.535190 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.535997 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ba43dd26a73790abb8822133aa68c3d720b150e646e5fb6df3567b35f24c5465"} pod="openshift-console/downloads-7954f5f757-5b7zm" 
containerMessage="Container download-server failed liveness probe, will be restarted" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.536087 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" containerID="cri-o://ba43dd26a73790abb8822133aa68c3d720b150e646e5fb6df3567b35f24c5465" gracePeriod=2 Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.537040 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.537074 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.598524 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:42 crc kubenswrapper[4823]: I0126 14:49:42.601993 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 14:49:44 crc kubenswrapper[4823]: I0126 14:49:44.318394 4823 generic.go:334] "Generic (PLEG): container finished" podID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerID="ba43dd26a73790abb8822133aa68c3d720b150e646e5fb6df3567b35f24c5465" exitCode=0 Jan 26 14:49:44 crc kubenswrapper[4823]: I0126 14:49:44.318605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5b7zm" 
event={"ID":"4609bcb4-b5ef-43fa-85be-2d897f635951","Type":"ContainerDied","Data":"ba43dd26a73790abb8822133aa68c3d720b150e646e5fb6df3567b35f24c5465"} Jan 26 14:49:45 crc kubenswrapper[4823]: I0126 14:49:45.540786 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:49:51 crc kubenswrapper[4823]: I0126 14:49:51.118902 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 26 14:49:52 crc kubenswrapper[4823]: I0126 14:49:52.533779 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:49:52 crc kubenswrapper[4823]: I0126 14:49:52.534117 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:49:53 crc kubenswrapper[4823]: I0126 14:49:53.332348 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qs9mc" Jan 26 14:50:02 crc kubenswrapper[4823]: I0126 14:50:02.533666 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:02 crc kubenswrapper[4823]: I0126 14:50:02.534155 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" 
podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:04 crc kubenswrapper[4823]: I0126 14:50:04.508450 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:50:04 crc kubenswrapper[4823]: I0126 14:50:04.508784 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.010887 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 14:50:05 crc kubenswrapper[4823]: E0126 14:50:05.011148 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1386ff-8611-4c67-952f-d7fd8c7df053" containerName="pruner" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011164 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1386ff-8611-4c67-952f-d7fd8c7df053" containerName="pruner" Jan 26 14:50:05 crc kubenswrapper[4823]: E0126 14:50:05.011176 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c67988c-1152-41a0-8f2d-2d3a5eb12c46" containerName="collect-profiles" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011182 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c67988c-1152-41a0-8f2d-2d3a5eb12c46" containerName="collect-profiles" Jan 26 14:50:05 crc kubenswrapper[4823]: E0126 14:50:05.011195 4823 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="1986f966-a6b6-4cc7-9916-c116f5e10e39" containerName="pruner" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011200 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1986f966-a6b6-4cc7-9916-c116f5e10e39" containerName="pruner" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011297 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c67988c-1152-41a0-8f2d-2d3a5eb12c46" containerName="collect-profiles" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011309 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1986f966-a6b6-4cc7-9916-c116f5e10e39" containerName="pruner" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011323 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1386ff-8611-4c67-952f-d7fd8c7df053" containerName="pruner" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.011748 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.015863 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.015888 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.027928 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.124395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.124794 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.225993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.226324 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.226535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.244199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 
14:50:05 crc kubenswrapper[4823]: I0126 14:50:05.359457 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.407433 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.408749 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.420088 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.499054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.499109 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20242eb-5d18-4aed-8862-4d000031d3e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.499273 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-var-lock\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.600863 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.600909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20242eb-5d18-4aed-8862-4d000031d3e9-kube-api-access\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.600974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-var-lock\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.601004 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.601061 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-var-lock\") pod \"installer-9-crc\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.623627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20242eb-5d18-4aed-8862-4d000031d3e9-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"c20242eb-5d18-4aed-8862-4d000031d3e9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:10 crc kubenswrapper[4823]: I0126 14:50:10.753178 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:50:12 crc kubenswrapper[4823]: I0126 14:50:12.535757 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:12 crc kubenswrapper[4823]: I0126 14:50:12.536334 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:14 crc kubenswrapper[4823]: E0126 14:50:14.643928 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 14:50:14 crc kubenswrapper[4823]: E0126 14:50:14.648505 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ctl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-s6bkg_openshift-marketplace(6cc17803-10bb-4c3c-b89f-4ecb574c2092): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 14:50:14 crc kubenswrapper[4823]: E0126 14:50:14.649762 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-s6bkg" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" Jan 26 14:50:14 crc 
kubenswrapper[4823]: E0126 14:50:14.698418 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 26 14:50:14 crc kubenswrapper[4823]: E0126 14:50:14.698644 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nssqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-psj4l_openshift-marketplace(66da9ec1-7863-4edb-8204-e0ea1812c556): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 14:50:14 crc kubenswrapper[4823]: E0126 14:50:14.700398 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-psj4l" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" Jan 26 14:50:16 crc kubenswrapper[4823]: E0126 14:50:16.429783 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-s6bkg" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" Jan 26 14:50:16 crc kubenswrapper[4823]: E0126 14:50:16.429832 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-psj4l" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" Jan 26 14:50:16 crc kubenswrapper[4823]: E0126 14:50:16.508200 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 14:50:16 crc kubenswrapper[4823]: E0126 14:50:16.508479 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xzclv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-xcg28_openshift-marketplace(4b0581ed-2fde-46ba-ae27-24b18e0e7ea8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 14:50:16 crc kubenswrapper[4823]: E0126 14:50:16.509820 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-xcg28" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.106983 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.107179 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxllb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFro
mSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-wx77c_openshift-marketplace(68042e2f-4e4f-4953-bcdf-f5fa08e199de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.109101 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wx77c" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.130527 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.131110 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55txg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-6jswn_openshift-marketplace(0a7642fa-63ff-41bb-950e-b0d1badff9fe): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.132283 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6jswn" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" Jan 26 14:50:18 crc 
kubenswrapper[4823]: E0126 14:50:18.135244 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.135640 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rwkq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-plnn5_openshift-marketplace(a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.137333 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-plnn5" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.523485 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 26 14:50:18 crc kubenswrapper[4823]: W0126 14:50:18.526854 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podc20242eb_5d18_4aed_8862_4d000031d3e9.slice/crio-2c72c78e35aa9786a2e452edc07bd360c763c94c42c88dfdb3946ca4a3d702da WatchSource:0}: Error finding container 2c72c78e35aa9786a2e452edc07bd360c763c94c42c88dfdb3946ca4a3d702da: Status 404 returned error can't find the container with id 2c72c78e35aa9786a2e452edc07bd360c763c94c42c88dfdb3946ca4a3d702da Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.593926 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 26 14:50:18 crc kubenswrapper[4823]: W0126 14:50:18.600828 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaec39c1c_744d_4fc8_b844_fd3bbfa1acc4.slice/crio-ed6e9118f87980b4d12651db915b4931fae98ddb28f537c708ed7a17e6c08893 WatchSource:0}: Error finding container ed6e9118f87980b4d12651db915b4931fae98ddb28f537c708ed7a17e6c08893: Status 404 returned error can't find the container with id ed6e9118f87980b4d12651db915b4931fae98ddb28f537c708ed7a17e6c08893 Jan 26 
14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.605066 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5b7zm" event={"ID":"4609bcb4-b5ef-43fa-85be-2d897f635951","Type":"ContainerStarted","Data":"34b378df0eb12b71cfffd760aa7c148acf6d416cfd2164dd2191243c4fe36b3a"} Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.605650 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.605710 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.605784 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.616121 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerStarted","Data":"40b2e06d1d64351ea426a7113320e31888648e19e9eb32e14af9a80b1f0a4b5c"} Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.621292 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c20242eb-5d18-4aed-8862-4d000031d3e9","Type":"ContainerStarted","Data":"2c72c78e35aa9786a2e452edc07bd360c763c94c42c88dfdb3946ca4a3d702da"} Jan 26 14:50:18 crc kubenswrapper[4823]: I0126 14:50:18.631580 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerStarted","Data":"143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef"} Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.635828 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-plnn5" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" Jan 26 14:50:18 crc kubenswrapper[4823]: E0126 14:50:18.638488 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6jswn" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" Jan 26 14:50:19 crc kubenswrapper[4823]: I0126 14:50:19.648947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4","Type":"ContainerStarted","Data":"ed6e9118f87980b4d12651db915b4931fae98ddb28f537c708ed7a17e6c08893"} Jan 26 14:50:19 crc kubenswrapper[4823]: I0126 14:50:19.652907 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:19 crc kubenswrapper[4823]: I0126 14:50:19.652999 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: 
connection refused" Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.658113 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerDied","Data":"40b2e06d1d64351ea426a7113320e31888648e19e9eb32e14af9a80b1f0a4b5c"} Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.658054 4823 generic.go:334] "Generic (PLEG): container finished" podID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerID="40b2e06d1d64351ea426a7113320e31888648e19e9eb32e14af9a80b1f0a4b5c" exitCode=0 Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.661952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c20242eb-5d18-4aed-8862-4d000031d3e9","Type":"ContainerStarted","Data":"45081858ec2c37d5567586229995da2bad3d3caf363567dbdd0058a91bb2b16e"} Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.667479 4823 generic.go:334] "Generic (PLEG): container finished" podID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerID="143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef" exitCode=0 Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.667561 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerDied","Data":"143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef"} Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.671088 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4","Type":"ContainerStarted","Data":"ad04a7df0ba51a358e1333d5f9f93a312b0dc43190981f13e2f961cd1c506f0c"} Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.671562 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.671618 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:20 crc kubenswrapper[4823]: I0126 14:50:20.731073 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=15.731038979000001 podStartE2EDuration="15.731038979s" podCreationTimestamp="2026-01-26 14:50:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:50:20.729084636 +0000 UTC m=+217.414547781" watchObservedRunningTime="2026-01-26 14:50:20.731038979 +0000 UTC m=+217.416502084" Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.534218 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.534265 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.534309 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.534338 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.684579 4823 generic.go:334] "Generic (PLEG): container finished" podID="aec39c1c-744d-4fc8-b844-fd3bbfa1acc4" containerID="ad04a7df0ba51a358e1333d5f9f93a312b0dc43190981f13e2f961cd1c506f0c" exitCode=0 Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.684655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4","Type":"ContainerDied","Data":"ad04a7df0ba51a358e1333d5f9f93a312b0dc43190981f13e2f961cd1c506f0c"} Jan 26 14:50:22 crc kubenswrapper[4823]: I0126 14:50:22.702226 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=12.702202125 podStartE2EDuration="12.702202125s" podCreationTimestamp="2026-01-26 14:50:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:50:20.754978878 +0000 UTC m=+217.440441993" watchObservedRunningTime="2026-01-26 14:50:22.702202125 +0000 UTC m=+219.387665230" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.068420 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.220484 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kube-api-access\") pod \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.220598 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kubelet-dir\") pod \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\" (UID: \"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4\") " Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.220903 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aec39c1c-744d-4fc8-b844-fd3bbfa1acc4" (UID: "aec39c1c-744d-4fc8-b844-fd3bbfa1acc4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.228550 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aec39c1c-744d-4fc8-b844-fd3bbfa1acc4" (UID: "aec39c1c-744d-4fc8-b844-fd3bbfa1acc4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.322153 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.322198 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aec39c1c-744d-4fc8-b844-fd3bbfa1acc4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.697270 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"aec39c1c-744d-4fc8-b844-fd3bbfa1acc4","Type":"ContainerDied","Data":"ed6e9118f87980b4d12651db915b4931fae98ddb28f537c708ed7a17e6c08893"} Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.697308 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed6e9118f87980b4d12651db915b4931fae98ddb28f537c708ed7a17e6c08893" Jan 26 14:50:24 crc kubenswrapper[4823]: I0126 14:50:24.697339 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 26 14:50:28 crc kubenswrapper[4823]: I0126 14:50:28.720168 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerStarted","Data":"e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a"} Jan 26 14:50:29 crc kubenswrapper[4823]: I0126 14:50:29.792584 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m282g" podStartSLOduration=9.908800424 podStartE2EDuration="1m3.792564645s" podCreationTimestamp="2026-01-26 14:49:26 +0000 UTC" firstStartedPulling="2026-01-26 14:49:32.90294908 +0000 UTC m=+169.588412175" lastFinishedPulling="2026-01-26 14:50:26.786713291 +0000 UTC m=+223.472176396" observedRunningTime="2026-01-26 14:50:29.788137988 +0000 UTC m=+226.473601093" watchObservedRunningTime="2026-01-26 14:50:29.792564645 +0000 UTC m=+226.478027750" Jan 26 14:50:30 crc kubenswrapper[4823]: I0126 14:50:30.735306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerStarted","Data":"042f8d92952c35870271918f6d23639c9ead115ddb4db5eed2deecd2f5aba660"} Jan 26 14:50:31 crc kubenswrapper[4823]: I0126 14:50:31.582914 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8k2h6" podStartSLOduration=7.96100047 podStartE2EDuration="1m4.582891967s" podCreationTimestamp="2026-01-26 14:49:27 +0000 UTC" firstStartedPulling="2026-01-26 14:49:32.903502315 +0000 UTC m=+169.588965420" lastFinishedPulling="2026-01-26 14:50:29.525393812 +0000 UTC m=+226.210856917" observedRunningTime="2026-01-26 14:50:30.762889442 +0000 UTC m=+227.448352547" watchObservedRunningTime="2026-01-26 14:50:31.582891967 +0000 UTC m=+228.268355062" Jan 26 
14:50:32 crc kubenswrapper[4823]: I0126 14:50:32.534482 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:32 crc kubenswrapper[4823]: I0126 14:50:32.534976 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:32 crc kubenswrapper[4823]: I0126 14:50:32.534554 4823 patch_prober.go:28] interesting pod/downloads-7954f5f757-5b7zm container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 26 14:50:32 crc kubenswrapper[4823]: I0126 14:50:32.535278 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5b7zm" podUID="4609bcb4-b5ef-43fa-85be-2d897f635951" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.508688 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.509043 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.509099 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.509689 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.509742 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e" gracePeriod=600 Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.757974 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerStarted","Data":"55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9"} Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.760895 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerStarted","Data":"eb821f7cd7f7853af4fc260b09de0180ceafd888e3ff45aedd5285cce6e2d796"} Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.769441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerStarted","Data":"eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224"} Jan 26 14:50:34 crc kubenswrapper[4823]: I0126 14:50:34.771198 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerStarted","Data":"8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189"} Jan 26 14:50:36 crc kubenswrapper[4823]: I0126 14:50:36.945226 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerStarted","Data":"6aeaff976246703bded7d3117a4239cd95cb315a38cb3c8b567e4ff4b222f742"} Jan 26 14:50:36 crc kubenswrapper[4823]: I0126 14:50:36.948694 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerStarted","Data":"a6556acdeaf6d986fdcae4669827fa76313e043a660cda174bc8057e96d0f373"} Jan 26 14:50:36 crc kubenswrapper[4823]: I0126 14:50:36.950815 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e" exitCode=0 Jan 26 14:50:36 crc kubenswrapper[4823]: I0126 14:50:36.950918 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e"} Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 14:50:37.953891 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 
14:50:37.956614 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 14:50:37.985579 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"c1ac15f349ba1ced1b4d92c4c521df0c2d2acf5310bb08adcb7f5c409967c6a5"} Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 14:50:37.990968 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 14:50:37.991124 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 14:50:37.992995 4823 generic.go:334] "Generic (PLEG): container finished" podID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerID="55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9" exitCode=0 Jan 26 14:50:37 crc kubenswrapper[4823]: I0126 14:50:37.993122 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerDied","Data":"55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9"} Jan 26 14:50:38 crc kubenswrapper[4823]: I0126 14:50:38.016509 4823 generic.go:334] "Generic (PLEG): container finished" podID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerID="eb821f7cd7f7853af4fc260b09de0180ceafd888e3ff45aedd5285cce6e2d796" exitCode=0 Jan 26 14:50:38 crc kubenswrapper[4823]: I0126 14:50:38.016561 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" 
event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerDied","Data":"eb821f7cd7f7853af4fc260b09de0180ceafd888e3ff45aedd5285cce6e2d796"} Jan 26 14:50:38 crc kubenswrapper[4823]: I0126 14:50:38.304257 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:50:38 crc kubenswrapper[4823]: I0126 14:50:38.314737 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:50:39 crc kubenswrapper[4823]: I0126 14:50:39.035926 4823 generic.go:334] "Generic (PLEG): container finished" podID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerID="a6556acdeaf6d986fdcae4669827fa76313e043a660cda174bc8057e96d0f373" exitCode=0 Jan 26 14:50:39 crc kubenswrapper[4823]: I0126 14:50:39.036129 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerDied","Data":"a6556acdeaf6d986fdcae4669827fa76313e043a660cda174bc8057e96d0f373"} Jan 26 14:50:39 crc kubenswrapper[4823]: I0126 14:50:39.212516 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:50:39 crc kubenswrapper[4823]: I0126 14:50:39.213045 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:50:40 crc kubenswrapper[4823]: I0126 14:50:40.043932 4823 generic.go:334] "Generic (PLEG): container finished" podID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerID="6aeaff976246703bded7d3117a4239cd95cb315a38cb3c8b567e4ff4b222f742" exitCode=0 Jan 26 14:50:40 crc kubenswrapper[4823]: I0126 14:50:40.044012 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" 
event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerDied","Data":"6aeaff976246703bded7d3117a4239cd95cb315a38cb3c8b567e4ff4b222f742"} Jan 26 14:50:41 crc kubenswrapper[4823]: I0126 14:50:41.092779 4823 generic.go:334] "Generic (PLEG): container finished" podID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerID="eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224" exitCode=0 Jan 26 14:50:41 crc kubenswrapper[4823]: I0126 14:50:41.093702 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerDied","Data":"eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224"} Jan 26 14:50:41 crc kubenswrapper[4823]: I0126 14:50:41.600303 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k2h6"] Jan 26 14:50:41 crc kubenswrapper[4823]: I0126 14:50:41.600578 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8k2h6" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="registry-server" containerID="cri-o://042f8d92952c35870271918f6d23639c9ead115ddb4db5eed2deecd2f5aba660" gracePeriod=2 Jan 26 14:50:42 crc kubenswrapper[4823]: I0126 14:50:42.099495 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerStarted","Data":"c5ec2fd1bd69f5fcbc56f550d5b86427dca1832a3f25d8ffcb33f673bb4dfca6"} Jan 26 14:50:42 crc kubenswrapper[4823]: I0126 14:50:42.101147 4823 generic.go:334] "Generic (PLEG): container finished" podID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerID="8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189" exitCode=0 Jan 26 14:50:42 crc kubenswrapper[4823]: I0126 14:50:42.101176 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerDied","Data":"8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189"} Jan 26 14:50:42 crc kubenswrapper[4823]: I0126 14:50:42.545007 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5b7zm" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.109125 4823 generic.go:334] "Generic (PLEG): container finished" podID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerID="042f8d92952c35870271918f6d23639c9ead115ddb4db5eed2deecd2f5aba660" exitCode=0 Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.109558 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerDied","Data":"042f8d92952c35870271918f6d23639c9ead115ddb4db5eed2deecd2f5aba660"} Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.128464 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wx77c" podStartSLOduration=9.288286275 podStartE2EDuration="1m18.128442526s" podCreationTimestamp="2026-01-26 14:49:25 +0000 UTC" firstStartedPulling="2026-01-26 14:49:31.855521823 +0000 UTC m=+168.540984928" lastFinishedPulling="2026-01-26 14:50:40.695678074 +0000 UTC m=+237.381141179" observedRunningTime="2026-01-26 14:50:43.127394937 +0000 UTC m=+239.812858062" watchObservedRunningTime="2026-01-26 14:50:43.128442526 +0000 UTC m=+239.813905631" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.207383 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.302632 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mwkh\" (UniqueName: \"kubernetes.io/projected/9893d37e-1139-4f8c-974f-d82d38bb4014-kube-api-access-2mwkh\") pod \"9893d37e-1139-4f8c-974f-d82d38bb4014\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.304515 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-utilities\") pod \"9893d37e-1139-4f8c-974f-d82d38bb4014\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.304589 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-catalog-content\") pod \"9893d37e-1139-4f8c-974f-d82d38bb4014\" (UID: \"9893d37e-1139-4f8c-974f-d82d38bb4014\") " Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.306172 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-utilities" (OuterVolumeSpecName: "utilities") pod "9893d37e-1139-4f8c-974f-d82d38bb4014" (UID: "9893d37e-1139-4f8c-974f-d82d38bb4014"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.319145 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9893d37e-1139-4f8c-974f-d82d38bb4014-kube-api-access-2mwkh" (OuterVolumeSpecName: "kube-api-access-2mwkh") pod "9893d37e-1139-4f8c-974f-d82d38bb4014" (UID: "9893d37e-1139-4f8c-974f-d82d38bb4014"). InnerVolumeSpecName "kube-api-access-2mwkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.328187 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9893d37e-1139-4f8c-974f-d82d38bb4014" (UID: "9893d37e-1139-4f8c-974f-d82d38bb4014"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.405854 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mwkh\" (UniqueName: \"kubernetes.io/projected/9893d37e-1139-4f8c-974f-d82d38bb4014-kube-api-access-2mwkh\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.405898 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:43 crc kubenswrapper[4823]: I0126 14:50:43.405908 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9893d37e-1139-4f8c-974f-d82d38bb4014-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.120008 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8k2h6" event={"ID":"9893d37e-1139-4f8c-974f-d82d38bb4014","Type":"ContainerDied","Data":"80d737b5f9dab8e339445db0338915f9702dec7010f13f6db8c13983e879df97"} Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.120173 4823 scope.go:117] "RemoveContainer" containerID="042f8d92952c35870271918f6d23639c9ead115ddb4db5eed2deecd2f5aba660" Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.120816 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8k2h6" Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.148237 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k2h6"] Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.155060 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8k2h6"] Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.170782 4823 scope.go:117] "RemoveContainer" containerID="40b2e06d1d64351ea426a7113320e31888648e19e9eb32e14af9a80b1f0a4b5c" Jan 26 14:50:44 crc kubenswrapper[4823]: I0126 14:50:44.195084 4823 scope.go:117] "RemoveContainer" containerID="5ae1462da52b15580af40143b17af3ab876765f41928002f93a6b4211cc947da" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.132142 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerStarted","Data":"38a848e617ef64ea19440558db5265c2d48b9571cf04741e835b6cc38d80edda"} Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.138232 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerStarted","Data":"5f153700c84f92c32537d72942efb6b8f32bfafab46bc22dca9ff27f24db09a7"} Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.140806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerStarted","Data":"6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d"} Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.143349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" 
event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerStarted","Data":"98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802"} Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.148560 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerStarted","Data":"2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e"} Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.162619 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6jswn" podStartSLOduration=9.843102119 podStartE2EDuration="1m21.162581747s" podCreationTimestamp="2026-01-26 14:49:24 +0000 UTC" firstStartedPulling="2026-01-26 14:49:32.903257718 +0000 UTC m=+169.588720823" lastFinishedPulling="2026-01-26 14:50:44.222737346 +0000 UTC m=+240.908200451" observedRunningTime="2026-01-26 14:50:45.15748944 +0000 UTC m=+241.842952555" watchObservedRunningTime="2026-01-26 14:50:45.162581747 +0000 UTC m=+241.848044862" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.181350 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xcg28" podStartSLOduration=8.953186538 podStartE2EDuration="1m20.181318118s" podCreationTimestamp="2026-01-26 14:49:25 +0000 UTC" firstStartedPulling="2026-01-26 14:49:32.943169401 +0000 UTC m=+169.628632506" lastFinishedPulling="2026-01-26 14:50:44.171300981 +0000 UTC m=+240.856764086" observedRunningTime="2026-01-26 14:50:45.179063678 +0000 UTC m=+241.864526793" watchObservedRunningTime="2026-01-26 14:50:45.181318118 +0000 UTC m=+241.866781233" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.215833 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-psj4l" podStartSLOduration=6.846810717 podStartE2EDuration="1m17.2158025s" 
podCreationTimestamp="2026-01-26 14:49:28 +0000 UTC" firstStartedPulling="2026-01-26 14:49:34.004095303 +0000 UTC m=+170.689558408" lastFinishedPulling="2026-01-26 14:50:44.373087086 +0000 UTC m=+241.058550191" observedRunningTime="2026-01-26 14:50:45.210705743 +0000 UTC m=+241.896168858" watchObservedRunningTime="2026-01-26 14:50:45.2158025 +0000 UTC m=+241.901265605" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.254758 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s6bkg" podStartSLOduration=6.862250795 podStartE2EDuration="1m17.254725621s" podCreationTimestamp="2026-01-26 14:49:28 +0000 UTC" firstStartedPulling="2026-01-26 14:49:33.990126325 +0000 UTC m=+170.675589420" lastFinishedPulling="2026-01-26 14:50:44.382601141 +0000 UTC m=+241.068064246" observedRunningTime="2026-01-26 14:50:45.252969083 +0000 UTC m=+241.938432208" watchObservedRunningTime="2026-01-26 14:50:45.254725621 +0000 UTC m=+241.940188726" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.274123 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-plnn5" podStartSLOduration=7.500671561 podStartE2EDuration="1m20.274097799s" podCreationTimestamp="2026-01-26 14:49:25 +0000 UTC" firstStartedPulling="2026-01-26 14:49:31.507886914 +0000 UTC m=+168.193350019" lastFinishedPulling="2026-01-26 14:50:44.281313152 +0000 UTC m=+240.966776257" observedRunningTime="2026-01-26 14:50:45.2715436 +0000 UTC m=+241.957006705" watchObservedRunningTime="2026-01-26 14:50:45.274097799 +0000 UTC m=+241.959560904" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.575308 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" path="/var/lib/kubelet/pods/9893d37e-1139-4f8c-974f-d82d38bb4014/volumes" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.611604 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.611699 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.620756 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:50:45 crc kubenswrapper[4823]: I0126 14:50:45.620851 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.147934 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.148026 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.202419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.394573 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.394628 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.665226 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6jswn" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="registry-server" probeResult="failure" output=< Jan 26 14:50:46 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 14:50:46 crc 
kubenswrapper[4823]: > Jan 26 14:50:46 crc kubenswrapper[4823]: I0126 14:50:46.667055 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-plnn5" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="registry-server" probeResult="failure" output=< Jan 26 14:50:46 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 14:50:46 crc kubenswrapper[4823]: > Jan 26 14:50:47 crc kubenswrapper[4823]: I0126 14:50:47.232910 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:50:47 crc kubenswrapper[4823]: I0126 14:50:47.481765 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xcg28" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="registry-server" probeResult="failure" output=< Jan 26 14:50:47 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 14:50:47 crc kubenswrapper[4823]: > Jan 26 14:50:48 crc kubenswrapper[4823]: I0126 14:50:48.829430 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:50:48 crc kubenswrapper[4823]: I0126 14:50:48.831543 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:50:49 crc kubenswrapper[4823]: I0126 14:50:49.128630 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:50:49 crc kubenswrapper[4823]: I0126 14:50:49.128688 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:50:49 crc kubenswrapper[4823]: I0126 14:50:49.870674 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s6bkg" 
podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="registry-server" probeResult="failure" output=< Jan 26 14:50:49 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 14:50:49 crc kubenswrapper[4823]: > Jan 26 14:50:50 crc kubenswrapper[4823]: I0126 14:50:50.180535 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-psj4l" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="registry-server" probeResult="failure" output=< Jan 26 14:50:50 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 14:50:50 crc kubenswrapper[4823]: > Jan 26 14:50:50 crc kubenswrapper[4823]: I0126 14:50:50.211143 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wx77c"] Jan 26 14:50:50 crc kubenswrapper[4823]: I0126 14:50:50.212164 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wx77c" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="registry-server" containerID="cri-o://c5ec2fd1bd69f5fcbc56f550d5b86427dca1832a3f25d8ffcb33f673bb4dfca6" gracePeriod=2 Jan 26 14:50:52 crc kubenswrapper[4823]: I0126 14:50:52.044621 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6vd9x"] Jan 26 14:50:52 crc kubenswrapper[4823]: I0126 14:50:52.193948 4823 generic.go:334] "Generic (PLEG): container finished" podID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerID="c5ec2fd1bd69f5fcbc56f550d5b86427dca1832a3f25d8ffcb33f673bb4dfca6" exitCode=0 Jan 26 14:50:52 crc kubenswrapper[4823]: I0126 14:50:52.194039 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerDied","Data":"c5ec2fd1bd69f5fcbc56f550d5b86427dca1832a3f25d8ffcb33f673bb4dfca6"} Jan 26 14:50:53 crc 
kubenswrapper[4823]: I0126 14:50:53.123020 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.153355 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxllb\" (UniqueName: \"kubernetes.io/projected/68042e2f-4e4f-4953-bcdf-f5fa08e199de-kube-api-access-qxllb\") pod \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.153564 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-utilities\") pod \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.153665 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-catalog-content\") pod \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\" (UID: \"68042e2f-4e4f-4953-bcdf-f5fa08e199de\") " Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.154698 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-utilities" (OuterVolumeSpecName: "utilities") pod "68042e2f-4e4f-4953-bcdf-f5fa08e199de" (UID: "68042e2f-4e4f-4953-bcdf-f5fa08e199de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.161216 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68042e2f-4e4f-4953-bcdf-f5fa08e199de-kube-api-access-qxllb" (OuterVolumeSpecName: "kube-api-access-qxllb") pod "68042e2f-4e4f-4953-bcdf-f5fa08e199de" (UID: "68042e2f-4e4f-4953-bcdf-f5fa08e199de"). InnerVolumeSpecName "kube-api-access-qxllb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.207352 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68042e2f-4e4f-4953-bcdf-f5fa08e199de" (UID: "68042e2f-4e4f-4953-bcdf-f5fa08e199de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.210443 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wx77c" event={"ID":"68042e2f-4e4f-4953-bcdf-f5fa08e199de","Type":"ContainerDied","Data":"d2bcafccd532de2da0561e93e4cef039823bb85ce04c3e124b8bbf85727c964c"} Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.210538 4823 scope.go:117] "RemoveContainer" containerID="c5ec2fd1bd69f5fcbc56f550d5b86427dca1832a3f25d8ffcb33f673bb4dfca6" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.210788 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wx77c" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.249057 4823 scope.go:117] "RemoveContainer" containerID="eb821f7cd7f7853af4fc260b09de0180ceafd888e3ff45aedd5285cce6e2d796" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.249660 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wx77c"] Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.256466 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.256516 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68042e2f-4e4f-4953-bcdf-f5fa08e199de-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.256532 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxllb\" (UniqueName: \"kubernetes.io/projected/68042e2f-4e4f-4953-bcdf-f5fa08e199de-kube-api-access-qxllb\") on node \"crc\" DevicePath \"\"" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.258153 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wx77c"] Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.287537 4823 scope.go:117] "RemoveContainer" containerID="e3a6ab1b27a17a1aa2f26fb0a1a027b0af83c107cdd22b3cc78b95dc78bec766" Jan 26 14:50:53 crc kubenswrapper[4823]: I0126 14:50:53.569884 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" path="/var/lib/kubelet/pods/68042e2f-4e4f-4953-bcdf-f5fa08e199de/volumes" Jan 26 14:50:55 crc kubenswrapper[4823]: I0126 14:50:55.681645 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:50:55 crc kubenswrapper[4823]: I0126 14:50:55.685468 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:50:55 crc kubenswrapper[4823]: I0126 14:50:55.732912 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:50:55 crc kubenswrapper[4823]: I0126 14:50:55.753263 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:50:56 crc kubenswrapper[4823]: I0126 14:50:56.450937 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:50:56 crc kubenswrapper[4823]: I0126 14:50:56.496543 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.901710 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.902769 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="extract-utilities" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902790 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="extract-utilities" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.902800 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aec39c1c-744d-4fc8-b844-fd3bbfa1acc4" containerName="pruner" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902806 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="aec39c1c-744d-4fc8-b844-fd3bbfa1acc4" containerName="pruner" Jan 26 14:50:57 crc 
kubenswrapper[4823]: E0126 14:50:57.902814 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="extract-content" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902823 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="extract-content" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.902859 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="extract-content" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902865 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="extract-content" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.902876 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="registry-server" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902881 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="registry-server" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.902892 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="registry-server" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902898 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="registry-server" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.902926 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="extract-utilities" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.902935 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="extract-utilities" Jan 26 14:50:57 crc 
kubenswrapper[4823]: I0126 14:50:57.903135 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9893d37e-1139-4f8c-974f-d82d38bb4014" containerName="registry-server" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.903170 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="68042e2f-4e4f-4953-bcdf-f5fa08e199de" containerName="registry-server" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.903179 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="aec39c1c-744d-4fc8-b844-fd3bbfa1acc4" containerName="pruner" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.904323 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.904571 4823 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.904520 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.905165 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3" gracePeriod=15 Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.905230 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758" gracePeriod=15 Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.905196 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc" gracePeriod=15 Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.905258 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425" gracePeriod=15 Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.905817 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909195 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.909231 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909240 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.909265 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909274 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.909306 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909316 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.909350 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909379 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 14:50:57 crc kubenswrapper[4823]: E0126 14:50:57.909390 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909399 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909702 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909739 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909762 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909773 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.909784 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.905783 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee" gracePeriod=15 Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.923329 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: 
I0126 14:50:57.923398 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.923486 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.923569 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.923622 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.923646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:57 crc 
kubenswrapper[4823]: I0126 14:50:57.923697 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.923728 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:57 crc kubenswrapper[4823]: I0126 14:50:57.953190 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024568 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024617 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024648 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024666 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024748 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024796 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.024951 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025010 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025039 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025055 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025068 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.025228 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.248397 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:50:58 crc kubenswrapper[4823]: E0126 14:50:58.283407 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4f72d89bed0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:50:58.282147083 +0000 UTC m=+254.967610188,LastTimestamp:2026-01-26 14:50:58.282147083 +0000 UTC m=+254.967610188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.874471 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.876800 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.877454 4823 status_manager.go:851] 
"Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.877832 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.918265 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.919243 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.920071 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.920886 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.984564 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 26 14:50:58 crc kubenswrapper[4823]: I0126 14:50:58.984702 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.173817 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.174813 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.175324 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.175685 4823 status_manager.go:851] "Failed to get status for pod" 
podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.176159 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.235138 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.235948 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.236408 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.236980 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.237218 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.250974 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7e58804bb85c4bab0c2d8e06300590bb9236d8c4a328dc04f271dca5af54064f"} Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.252864 4823 generic.go:334] "Generic (PLEG): container finished" podID="c20242eb-5d18-4aed-8862-4d000031d3e9" containerID="45081858ec2c37d5567586229995da2bad3d3caf363567dbdd0058a91bb2b16e" exitCode=0 Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.252919 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c20242eb-5d18-4aed-8862-4d000031d3e9","Type":"ContainerDied","Data":"45081858ec2c37d5567586229995da2bad3d3caf363567dbdd0058a91bb2b16e"} Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.253717 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.253937 4823 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.254429 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.254639 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.254833 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.257130 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.257997 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc" exitCode=0 Jan 26 14:50:59 crc kubenswrapper[4823]: 
I0126 14:50:59.258015 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758" exitCode=0 Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.258026 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425" exitCode=0 Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.258033 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee" exitCode=2 Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.476731 4823 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 26 14:50:59 crc kubenswrapper[4823]: I0126 14:50:59.476834 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.264751 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a"} Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.266037 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.266354 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.266697 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.266966 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.559443 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.560855 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.561512 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.562077 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.562619 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.674230 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-var-lock\") pod \"c20242eb-5d18-4aed-8862-4d000031d3e9\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " 
Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.674309 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-kubelet-dir\") pod \"c20242eb-5d18-4aed-8862-4d000031d3e9\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.674388 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20242eb-5d18-4aed-8862-4d000031d3e9-kube-api-access\") pod \"c20242eb-5d18-4aed-8862-4d000031d3e9\" (UID: \"c20242eb-5d18-4aed-8862-4d000031d3e9\") " Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.674455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-var-lock" (OuterVolumeSpecName: "var-lock") pod "c20242eb-5d18-4aed-8862-4d000031d3e9" (UID: "c20242eb-5d18-4aed-8862-4d000031d3e9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.674935 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.674455 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c20242eb-5d18-4aed-8862-4d000031d3e9" (UID: "c20242eb-5d18-4aed-8862-4d000031d3e9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.682781 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20242eb-5d18-4aed-8862-4d000031d3e9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c20242eb-5d18-4aed-8862-4d000031d3e9" (UID: "c20242eb-5d18-4aed-8862-4d000031d3e9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.776066 4823 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c20242eb-5d18-4aed-8862-4d000031d3e9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.776111 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c20242eb-5d18-4aed-8862-4d000031d3e9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.786904 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.787899 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.788643 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.789191 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.789857 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.790419 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.790742 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.106:6443: connect: connection refused" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.877078 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.877734 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.877878 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.877189 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.877804 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.877953 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.878538 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.878594 4823 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:00 crc kubenswrapper[4823]: I0126 14:51:00.878607 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.277518 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.278323 4823 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3" exitCode=0 Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.278523 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.278668 4823 scope.go:117] "RemoveContainer" containerID="bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.282209 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.282217 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"c20242eb-5d18-4aed-8862-4d000031d3e9","Type":"ContainerDied","Data":"2c72c78e35aa9786a2e452edc07bd360c763c94c42c88dfdb3946ca4a3d702da"} Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.282403 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c72c78e35aa9786a2e452edc07bd360c763c94c42c88dfdb3946ca4a3d702da" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.298456 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.299139 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.299694 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.300123 4823 scope.go:117] "RemoveContainer" containerID="85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.300323 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.300636 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.311636 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.312090 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.312324 4823 
status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.312556 4823 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.312860 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.328001 4823 scope.go:117] "RemoveContainer" containerID="ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.348270 4823 scope.go:117] "RemoveContainer" containerID="d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.364698 4823 scope.go:117] "RemoveContainer" containerID="e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.377804 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4f72d89bed0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:50:58.282147083 +0000 UTC m=+254.967610188,LastTimestamp:2026-01-26 14:50:58.282147083 +0000 UTC m=+254.967610188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.385059 4823 scope.go:117] "RemoveContainer" containerID="ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.412413 4823 scope.go:117] "RemoveContainer" containerID="bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.416239 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\": container with ID starting with bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc not found: ID does not exist" containerID="bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.416305 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc"} err="failed to get 
container status \"bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\": rpc error: code = NotFound desc = could not find container \"bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc\": container with ID starting with bf76ecd91998e4783f5d1c899507d3300213b6c2c6742a61f537a27126bd15cc not found: ID does not exist" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.416347 4823 scope.go:117] "RemoveContainer" containerID="85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.417050 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\": container with ID starting with 85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758 not found: ID does not exist" containerID="85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.417137 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758"} err="failed to get container status \"85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\": rpc error: code = NotFound desc = could not find container \"85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758\": container with ID starting with 85066b084d59f08f05140c5ba8981cfebe1ef7422719e78cf69adf59f185e758 not found: ID does not exist" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.417188 4823 scope.go:117] "RemoveContainer" containerID="ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.417683 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\": container with ID starting with ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425 not found: ID does not exist" containerID="ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.417713 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425"} err="failed to get container status \"ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\": rpc error: code = NotFound desc = could not find container \"ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425\": container with ID starting with ba4d5fe475603ada01f9e6a64f9a74f09af29f980dfe45d2c3793ff751762425 not found: ID does not exist" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.417730 4823 scope.go:117] "RemoveContainer" containerID="d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.418150 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\": container with ID starting with d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee not found: ID does not exist" containerID="d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.418188 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee"} err="failed to get container status \"d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\": rpc error: code = NotFound desc = could not find container \"d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee\": container with ID 
starting with d82625cfedfacc4cac68acf6c879b1102433de5d31eba747247f3ca9edc171ee not found: ID does not exist" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.418220 4823 scope.go:117] "RemoveContainer" containerID="e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.418728 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\": container with ID starting with e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3 not found: ID does not exist" containerID="e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.418759 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3"} err="failed to get container status \"e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\": rpc error: code = NotFound desc = could not find container \"e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3\": container with ID starting with e48b7dfb38254a5b8f16495279781277ce83fc7729a855a24726a061de5c5fe3 not found: ID does not exist" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.418778 4823 scope.go:117] "RemoveContainer" containerID="ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f" Jan 26 14:51:01 crc kubenswrapper[4823]: E0126 14:51:01.419113 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\": container with ID starting with ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f not found: ID does not exist" containerID="ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f" Jan 26 
14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.419201 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f"} err="failed to get container status \"ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\": rpc error: code = NotFound desc = could not find container \"ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f\": container with ID starting with ed3ae1f890305df3f99f28b8c14aa15bcda071f3c50a1506ec358f0cf728af4f not found: ID does not exist" Jan 26 14:51:01 crc kubenswrapper[4823]: I0126 14:51:01.569066 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 26 14:51:03 crc kubenswrapper[4823]: I0126 14:51:03.564548 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:03 crc kubenswrapper[4823]: I0126 14:51:03.566266 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:03 crc kubenswrapper[4823]: I0126 14:51:03.566743 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: 
connect: connection refused" Jan 26 14:51:03 crc kubenswrapper[4823]: I0126 14:51:03.566967 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:07 crc kubenswrapper[4823]: E0126 14:51:07.907863 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:07 crc kubenswrapper[4823]: E0126 14:51:07.909003 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:07 crc kubenswrapper[4823]: E0126 14:51:07.909849 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:07 crc kubenswrapper[4823]: E0126 14:51:07.910214 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:07 crc kubenswrapper[4823]: E0126 14:51:07.910703 4823 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:07 crc kubenswrapper[4823]: I0126 14:51:07.910765 4823 controller.go:115] "failed to 
update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 26 14:51:07 crc kubenswrapper[4823]: E0126 14:51:07.911190 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="200ms" Jan 26 14:51:08 crc kubenswrapper[4823]: E0126 14:51:08.113479 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="400ms" Jan 26 14:51:08 crc kubenswrapper[4823]: E0126 14:51:08.514054 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="800ms" Jan 26 14:51:09 crc kubenswrapper[4823]: E0126 14:51:09.315921 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="1.6s" Jan 26 14:51:10 crc kubenswrapper[4823]: I0126 14:51:10.850028 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 14:51:10 crc kubenswrapper[4823]: I0126 14:51:10.850800 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 14:51:10 crc kubenswrapper[4823]: E0126 14:51:10.916851 4823 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.106:6443: connect: connection refused" interval="3.2s" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.366786 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.366868 4823 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c" exitCode=1 Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.366921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c"} Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.367573 4823 scope.go:117] "RemoveContainer" containerID="e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.368844 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 
14:51:11.369140 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.369448 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.371631 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:11 crc kubenswrapper[4823]: I0126 14:51:11.372239 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:11 crc kubenswrapper[4823]: E0126 14:51:11.378751 4823 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.106:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188e4f72d89bed0b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-26 14:50:58.282147083 +0000 UTC m=+254.967610188,LastTimestamp:2026-01-26 14:50:58.282147083 +0000 UTC m=+254.967610188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.376837 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.376961 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e29bce9aef66f54f72569a951f1abbfc674216d29cede2d99455fea7142f54f7"} Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.378231 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.378562 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.378970 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.379654 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.379983 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.560043 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.561059 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.561350 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.561882 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.562513 4823 status_manager.go:851] "Failed to get status for pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.563439 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.577751 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.577800 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:12 crc kubenswrapper[4823]: E0126 14:51:12.578383 4823 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:12 crc kubenswrapper[4823]: I0126 14:51:12.578979 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:12 crc kubenswrapper[4823]: W0126 14:51:12.603209 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8d643998ffdb5d8757cc9b39ee913c77d8c7d89646a86a1447a93d5ea16892a7 WatchSource:0}: Error finding container 8d643998ffdb5d8757cc9b39ee913c77d8c7d89646a86a1447a93d5ea16892a7: Status 404 returned error can't find the container with id 8d643998ffdb5d8757cc9b39ee913c77d8c7d89646a86a1447a93d5ea16892a7 Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.393752 4823 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="05dd2bdeb644b7dc14102f8176577ece8362d0691d61636353d095b70aab8c77" exitCode=0 Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.393822 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"05dd2bdeb644b7dc14102f8176577ece8362d0691d61636353d095b70aab8c77"} Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.393865 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8d643998ffdb5d8757cc9b39ee913c77d8c7d89646a86a1447a93d5ea16892a7"} Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.394177 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.394193 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:13 crc kubenswrapper[4823]: E0126 14:51:13.394892 4823 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.395530 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.396083 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.396705 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.397210 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.397652 4823 status_manager.go:851] "Failed to get status for pod" 
podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.565756 4823 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.566831 4823 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.567115 4823 status_manager.go:851] "Failed to get status for pod" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.567424 4823 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.567719 4823 status_manager.go:851] "Failed to get status for 
pod" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" pod="openshift-marketplace/redhat-operators-psj4l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-psj4l\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:13 crc kubenswrapper[4823]: I0126 14:51:13.567907 4823 status_manager.go:851] "Failed to get status for pod" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" pod="openshift-marketplace/redhat-operators-s6bkg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-s6bkg\": dial tcp 38.102.83.106:6443: connect: connection refused" Jan 26 14:51:14 crc kubenswrapper[4823]: I0126 14:51:14.408532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"95d5de4a7c9a63bdb8ece24225e7f448bb84f51988f3922e59e1530ce767e385"} Jan 26 14:51:14 crc kubenswrapper[4823]: I0126 14:51:14.409012 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"bc2cc87f3e4636f02cfe2cd23caf8074550e7e3fb3a6fc9e9dba72515ff08011"} Jan 26 14:51:14 crc kubenswrapper[4823]: I0126 14:51:14.409032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"178789a0f030b58394a6f4b0d1c064510a26e1ef439d8f392fee966cc51fcd0e"} Jan 26 14:51:14 crc kubenswrapper[4823]: I0126 14:51:14.455319 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:51:14 crc kubenswrapper[4823]: I0126 14:51:14.455434 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 14:51:14 crc kubenswrapper[4823]: I0126 14:51:14.455522 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 14:51:15 crc kubenswrapper[4823]: I0126 14:51:15.418589 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"89b503890babe3b26d9cdf92b62cf7babfb131c153339b22ff46065100e2095e"} Jan 26 14:51:15 crc kubenswrapper[4823]: I0126 14:51:15.418647 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"80873539b24c6d530ecc205d2c57cc83f1005ca00763a93728ed414a07237a60"} Jan 26 14:51:15 crc kubenswrapper[4823]: I0126 14:51:15.418837 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:15 crc kubenswrapper[4823]: I0126 14:51:15.419010 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:15 crc kubenswrapper[4823]: I0126 14:51:15.419059 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:16 crc kubenswrapper[4823]: I0126 14:51:16.389875 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.097573 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerName="oauth-openshift" containerID="cri-o://43d6a40d7311c3cc7e46a1d3d836c442fa978a42ca3f41b62273b10b2d816005" gracePeriod=15 Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.548501 4823 generic.go:334] "Generic (PLEG): container finished" podID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerID="43d6a40d7311c3cc7e46a1d3d836c442fa978a42ca3f41b62273b10b2d816005" exitCode=0 Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.548628 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" event={"ID":"b942d06c-fac8-4546-98a6-f36d0666d0d4","Type":"ContainerDied","Data":"43d6a40d7311c3cc7e46a1d3d836c442fa978a42ca3f41b62273b10b2d816005"} Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.579963 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.580077 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.585324 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:17 crc kubenswrapper[4823]: I0126 14:51:17.590778 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:51:18 crc kubenswrapper[4823]: I0126 14:51:18.557250 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" event={"ID":"b942d06c-fac8-4546-98a6-f36d0666d0d4","Type":"ContainerDied","Data":"d75f0bd4f906ec394f8e39190e12f875fdd529ad56b5c2dd3fdc132161542814"} Jan 26 14:51:18 crc kubenswrapper[4823]: I0126 14:51:18.557295 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6vd9x" Jan 26 14:51:18 crc kubenswrapper[4823]: I0126 14:51:18.557804 4823 scope.go:117] "RemoveContainer" containerID="43d6a40d7311c3cc7e46a1d3d836c442fa978a42ca3f41b62273b10b2d816005" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.449280 4823 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.515875 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="428e42be-0ee2-47f7-9722-51202fc4b7b2" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.573060 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.573090 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.624584 4823 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="428e42be-0ee2-47f7-9722-51202fc4b7b2" Jan 26 14:51:20 
crc kubenswrapper[4823]: I0126 14:51:20.715533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-trusted-ca-bundle\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-router-certs\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715623 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-ocp-branding-template\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715651 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-cliconfig\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715677 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-serving-cert\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715703 
4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-service-ca\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-policies\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715774 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-idp-0-file-data\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715820 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-542g6\" (UniqueName: \"kubernetes.io/projected/b942d06c-fac8-4546-98a6-f36d0666d0d4-kube-api-access-542g6\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715853 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-error\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715895 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" 
(UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-session\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-provider-selection\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715954 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-login\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.715991 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-dir\") pod \"b942d06c-fac8-4546-98a6-f36d0666d0d4\" (UID: \"b942d06c-fac8-4546-98a6-f36d0666d0d4\") " Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.717082 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.717100 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.718810 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.723330 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.723837 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.724518 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.726824 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.728002 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.729176 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.731750 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b942d06c-fac8-4546-98a6-f36d0666d0d4-kube-api-access-542g6" (OuterVolumeSpecName: "kube-api-access-542g6") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "kube-api-access-542g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.733214 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.733639 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.734202 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.735970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b942d06c-fac8-4546-98a6-f36d0666d0d4" (UID: "b942d06c-fac8-4546-98a6-f36d0666d0d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818163 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818213 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818228 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818241 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818254 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818265 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818278 4823 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818293 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818307 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-542g6\" (UniqueName: \"kubernetes.io/projected/b942d06c-fac8-4546-98a6-f36d0666d0d4-kube-api-access-542g6\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818319 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818332 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818351 4823 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818393 4823 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b942d06c-fac8-4546-98a6-f36d0666d0d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:20 crc kubenswrapper[4823]: I0126 14:51:20.818405 4823 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b942d06c-fac8-4546-98a6-f36d0666d0d4-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:21 crc kubenswrapper[4823]: E0126 14:51:21.070917 4823 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Jan 26 14:51:24 crc kubenswrapper[4823]: I0126 14:51:24.456415 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 14:51:24 crc kubenswrapper[4823]: I0126 14:51:24.457099 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 14:51:30 crc kubenswrapper[4823]: I0126 14:51:30.410539 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 26 14:51:30 crc 
kubenswrapper[4823]: I0126 14:51:30.488673 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:30.731853 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.421251 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.464680 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.564935 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.607143 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.610800 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.660616 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.692946 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.754321 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 26 14:51:31 crc kubenswrapper[4823]: I0126 14:51:31.892729 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 26 14:51:32 crc 
kubenswrapper[4823]: I0126 14:51:32.233922 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 26 14:51:32 crc kubenswrapper[4823]: I0126 14:51:32.278044 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 26 14:51:32 crc kubenswrapper[4823]: I0126 14:51:32.943283 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.131111 4823 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.249760 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.403768 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.514639 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.537380 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.542659 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.677056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.679803 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.815077 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.853876 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.866648 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.902928 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 26 14:51:33 crc kubenswrapper[4823]: I0126 14:51:33.974096 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.119478 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.170455 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.223357 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.309627 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.312744 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.335392 4823 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.448761 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.455240 4823 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.455286 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.455347 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.456069 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e29bce9aef66f54f72569a951f1abbfc674216d29cede2d99455fea7142f54f7"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.456214 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" 
containerID="cri-o://e29bce9aef66f54f72569a951f1abbfc674216d29cede2d99455fea7142f54f7" gracePeriod=30 Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.537772 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.538662 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.592461 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.614590 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.618772 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.669791 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.684528 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.734446 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.809059 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.877013 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" 
Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.882714 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.910784 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 26 14:51:34 crc kubenswrapper[4823]: I0126 14:51:34.942165 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.008040 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.031839 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.045786 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.072196 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.236483 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.263354 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.264138 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.378456 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"kube-root-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.379823 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.395791 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.434223 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.480563 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.511312 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.603048 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.659511 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.742312 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.761072 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.784453 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.818145 4823 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.840831 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.872206 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.874226 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.882167 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 26 14:51:35 crc kubenswrapper[4823]: I0126 14:51:35.942554 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.152999 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.225884 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.227431 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.277946 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.330709 4823 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.428818 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.493588 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.500886 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.577493 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.755827 4823 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.872993 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 26 14:51:36 crc kubenswrapper[4823]: I0126 14:51:36.942856 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.014551 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.123629 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.175575 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.304397 4823 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.346853 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.401841 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.539129 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.540629 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.635124 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.696841 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.720349 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.876675 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.878623 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 26 14:51:37 crc kubenswrapper[4823]: I0126 14:51:37.925413 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 26 
14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.017271 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.017550 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.080671 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.196427 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.241494 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.277985 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.292569 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.323611 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.335632 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.433817 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.487581 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.529100 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.703152 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.705587 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.734134 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.740976 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 14:51:38 crc kubenswrapper[4823]: I0126 14:51:38.881279 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.002998 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.139750 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.195173 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.202036 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 26 14:51:39 crc kubenswrapper[4823]: 
I0126 14:51:39.210575 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.408569 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.612763 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.623057 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.652097 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.746751 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.791225 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 26 14:51:39 crc kubenswrapper[4823]: I0126 14:51:39.913509 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.073202 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.272494 4823 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.307432 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.373621 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.439239 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.496521 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.506057 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.516048 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.537688 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.539221 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.552319 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.607749 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.822503 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.825792 4823 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 26 14:51:40 crc kubenswrapper[4823]: I0126 14:51:40.958687 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.008539 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.009346 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.015087 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.051937 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.203707 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.265641 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.380959 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.446155 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.469675 4823 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 26 14:51:41 crc kubenswrapper[4823]: 
I0126 14:51:41.470529 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.470498054 podStartE2EDuration="44.470498054s" podCreationTimestamp="2026-01-26 14:50:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:51:20.527871865 +0000 UTC m=+277.213334960" watchObservedRunningTime="2026-01-26 14:51:41.470498054 +0000 UTC m=+298.155961169" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.476379 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6vd9x","openshift-kube-apiserver/kube-apiserver-crc"] Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.476501 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl"] Jan 26 14:51:41 crc kubenswrapper[4823]: E0126 14:51:41.476805 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" containerName="installer" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.476836 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" containerName="installer" Jan 26 14:51:41 crc kubenswrapper[4823]: E0126 14:51:41.476855 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerName="oauth-openshift" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.476865 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerName="oauth-openshift" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.477003 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" containerName="oauth-openshift" Jan 26 
14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.477026 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20242eb-5d18-4aed-8862-4d000031d3e9" containerName="installer" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.477182 4823 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.477254 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ef64e7a1-3b41-43fe-90ef-603abc3e6b63" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.477790 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.481880 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.482461 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.485436 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.485844 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.485984 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.486230 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 26 14:51:41 crc kubenswrapper[4823]: 
I0126 14:51:41.486232 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.487139 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.487479 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.487826 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.488120 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.488224 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.488462 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.492276 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.496699 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.502140 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.535448 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.53543158 podStartE2EDuration="21.53543158s" podCreationTimestamp="2026-01-26 14:51:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:51:41.534105405 +0000 UTC m=+298.219568520" watchObservedRunningTime="2026-01-26 14:51:41.53543158 +0000 UTC m=+298.220894675" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.570051 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b942d06c-fac8-4546-98a6-f36d0666d0d4" path="/var/lib/kubelet/pods/b942d06c-fac8-4546-98a6-f36d0666d0d4/volumes" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.611146 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.627851 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.627909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.627936 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.627961 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.627986 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628011 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: 
\"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628064 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628127 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/326827d0-4111-4c4b-88f2-47ba5553a488-audit-dir\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628144 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtdm\" (UniqueName: \"kubernetes.io/projected/326827d0-4111-4c4b-88f2-47ba5553a488-kube-api-access-zrtdm\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-audit-policies\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628192 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.628231 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.722986 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.723035 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.729940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-audit-policies\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.729996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.730026 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.730058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " 
pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731318 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731356 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731418 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731460 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731508 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.732011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.732062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.732144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtdm\" (UniqueName: \"kubernetes.io/projected/326827d0-4111-4c4b-88f2-47ba5553a488-kube-api-access-zrtdm\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.732169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/326827d0-4111-4c4b-88f2-47ba5553a488-audit-dir\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " 
pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.732294 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/326827d0-4111-4c4b-88f2-47ba5553a488-audit-dir\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731140 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.731086 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-audit-policies\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.734462 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.734770 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-service-ca\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.737120 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-error\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.737680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-router-certs\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.737746 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.737954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-template-login\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " 
pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.738004 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.739519 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.744432 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-session\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.745600 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/326827d0-4111-4c4b-88f2-47ba5553a488-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.755000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtdm\" (UniqueName: 
\"kubernetes.io/projected/326827d0-4111-4c4b-88f2-47ba5553a488-kube-api-access-zrtdm\") pod \"oauth-openshift-7bccf64dbb-q4pfl\" (UID: \"326827d0-4111-4c4b-88f2-47ba5553a488\") " pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.797661 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.830962 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 26 14:51:41 crc kubenswrapper[4823]: I0126 14:51:41.905715 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.021760 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.053002 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.056391 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.056491 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl"] Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.181582 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.186915 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 
14:51:42.214574 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.236923 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.246724 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.327171 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.426311 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.428990 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.568721 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.606750 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.619030 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.725740 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" event={"ID":"326827d0-4111-4c4b-88f2-47ba5553a488","Type":"ContainerStarted","Data":"38e3a8dc01c09d0ad05849a873f13b34fc9e48e692f02b041716e8b43fd09535"} Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 
14:51:42.725818 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" event={"ID":"326827d0-4111-4c4b-88f2-47ba5553a488","Type":"ContainerStarted","Data":"3102477e11ba685a7c033b0d09bc69cba31a228e6daea032a0249ecff512d21e"} Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.757768 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" podStartSLOduration=50.757742003 podStartE2EDuration="50.757742003s" podCreationTimestamp="2026-01-26 14:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:51:42.754569669 +0000 UTC m=+299.440032784" watchObservedRunningTime="2026-01-26 14:51:42.757742003 +0000 UTC m=+299.443205108" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.796235 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.853407 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.868574 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.871912 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.888142 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 26 14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.966456 4823 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 26 
14:51:42 crc kubenswrapper[4823]: I0126 14:51:42.973727 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.016585 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.045261 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.102914 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.107785 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.132724 4823 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.133142 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a" gracePeriod=5 Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.179978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.200893 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.239834 4823 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.246334 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.374187 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.413916 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.428535 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.443389 4823 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.550152 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.561836 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.673460 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.700283 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.714354 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.722551 4823 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.734752 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.741732 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.757799 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.794218 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.815626 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.906338 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.908958 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.918794 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.927443 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.938209 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 26 14:51:43 crc 
kubenswrapper[4823]: I0126 14:51:43.974599 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 26 14:51:43 crc kubenswrapper[4823]: I0126 14:51:43.985039 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.049485 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.098414 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.140453 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.178582 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.251010 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.329959 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.399785 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.430547 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.568557 4823 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver"/"etcd-client" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.661913 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.706473 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.757101 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.848851 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 26 14:51:44 crc kubenswrapper[4823]: I0126 14:51:44.939118 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.116724 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.119456 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.295211 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.365919 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.607480 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 26 14:51:45 crc 
kubenswrapper[4823]: I0126 14:51:45.721891 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.809632 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 26 14:51:45 crc kubenswrapper[4823]: I0126 14:51:45.887479 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.076002 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.146932 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.148286 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.469325 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.496279 4823 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.583282 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.659148 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.679856 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 26 14:51:46 crc kubenswrapper[4823]: I0126 14:51:46.891327 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.706556 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.707052 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.767844 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.767941 4823 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a" exitCode=137 Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.768018 4823 scope.go:117] "RemoveContainer" containerID="447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.768210 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.788398 4823 scope.go:117] "RemoveContainer" containerID="447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a" Jan 26 14:51:48 crc kubenswrapper[4823]: E0126 14:51:48.789088 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a\": container with ID starting with 447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a not found: ID does not exist" containerID="447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.789169 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a"} err="failed to get container status \"447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a\": rpc error: code = NotFound desc = could not find container \"447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a\": container with ID starting with 447b3eba5cd93a67fb056b1f43f8e3bba881cfc5276826055b4a13ef238dea8a not found: ID does not exist" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848684 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848779 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 
26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848814 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848873 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.848951 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.849038 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.849072 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.849682 4823 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.849712 4823 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.849725 4823 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.849734 4823 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.858465 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:51:48 crc kubenswrapper[4823]: I0126 14:51:48.951427 4823 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 26 14:51:49 crc kubenswrapper[4823]: I0126 14:51:49.570271 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 26 14:51:49 crc kubenswrapper[4823]: I0126 14:51:49.571045 4823 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 26 14:51:49 crc kubenswrapper[4823]: I0126 14:51:49.581864 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:51:49 crc kubenswrapper[4823]: I0126 14:51:49.582472 4823 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f7d5d187-1f0a-47b3-9efa-0c89720dae81" Jan 26 14:51:49 crc kubenswrapper[4823]: I0126 14:51:49.585965 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 26 14:51:49 crc kubenswrapper[4823]: I0126 14:51:49.586021 4823 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="f7d5d187-1f0a-47b3-9efa-0c89720dae81" Jan 26 14:52:04 crc kubenswrapper[4823]: I0126 14:52:04.895026 4823 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 14:52:04 crc kubenswrapper[4823]: I0126 14:52:04.897621 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 26 14:52:04 crc kubenswrapper[4823]: I0126 14:52:04.897689 4823 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e29bce9aef66f54f72569a951f1abbfc674216d29cede2d99455fea7142f54f7" exitCode=137 Jan 26 14:52:04 crc kubenswrapper[4823]: I0126 14:52:04.897724 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e29bce9aef66f54f72569a951f1abbfc674216d29cede2d99455fea7142f54f7"} Jan 26 14:52:04 crc kubenswrapper[4823]: I0126 14:52:04.897768 4823 scope.go:117] "RemoveContainer" containerID="e920de0f1ad818fb35888fd9c478e1149eb7524e5cf3a31b45061f806deace3c" Jan 26 14:52:05 crc kubenswrapper[4823]: I0126 14:52:05.904797 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 26 14:52:05 crc kubenswrapper[4823]: I0126 14:52:05.906052 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ddfdd48ced57ec871e67371cb86421ca73fd11a47c376b57029f83eca710b8f7"} Jan 26 14:52:06 crc kubenswrapper[4823]: I0126 14:52:06.389918 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:52:06 crc 
kubenswrapper[4823]: I0126 14:52:06.426739 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 26 14:52:08 crc kubenswrapper[4823]: I0126 14:52:08.977080 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.112433 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xcg28"] Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.114131 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xcg28" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="registry-server" containerID="cri-o://98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802" gracePeriod=2 Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.484980 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.626838 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzclv\" (UniqueName: \"kubernetes.io/projected/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-kube-api-access-xzclv\") pod \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.627068 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-utilities\") pod \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.627210 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-catalog-content\") pod \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\" (UID: \"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8\") " Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.628088 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-utilities" (OuterVolumeSpecName: "utilities") pod "4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" (UID: "4b0581ed-2fde-46ba-ae27-24b18e0e7ea8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.635962 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-kube-api-access-xzclv" (OuterVolumeSpecName: "kube-api-access-xzclv") pod "4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" (UID: "4b0581ed-2fde-46ba-ae27-24b18e0e7ea8"). InnerVolumeSpecName "kube-api-access-xzclv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.689823 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" (UID: "4b0581ed-2fde-46ba-ae27-24b18e0e7ea8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.729329 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzclv\" (UniqueName: \"kubernetes.io/projected/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-kube-api-access-xzclv\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.729376 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.729391 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.954598 4823 generic.go:334] "Generic (PLEG): container finished" podID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerID="98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802" exitCode=0 Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.954682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerDied","Data":"98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802"} Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.954706 4823 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-xcg28" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.954723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xcg28" event={"ID":"4b0581ed-2fde-46ba-ae27-24b18e0e7ea8","Type":"ContainerDied","Data":"f5c74124cf0821fa289b30944644b84e45763ad53fe42814b564bd5d1ed13cd4"} Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.954743 4823 scope.go:117] "RemoveContainer" containerID="98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.984197 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xcg28"] Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.984435 4823 scope.go:117] "RemoveContainer" containerID="55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9" Jan 26 14:52:13 crc kubenswrapper[4823]: I0126 14:52:13.988668 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xcg28"] Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.001640 4823 scope.go:117] "RemoveContainer" containerID="fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.036547 4823 scope.go:117] "RemoveContainer" containerID="98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802" Jan 26 14:52:14 crc kubenswrapper[4823]: E0126 14:52:14.038852 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802\": container with ID starting with 98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802 not found: ID does not exist" containerID="98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.038904 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802"} err="failed to get container status \"98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802\": rpc error: code = NotFound desc = could not find container \"98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802\": container with ID starting with 98323129c6451900535db02e7dd2def416ab34f25feed3d64a3f76333e816802 not found: ID does not exist" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.038936 4823 scope.go:117] "RemoveContainer" containerID="55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9" Jan 26 14:52:14 crc kubenswrapper[4823]: E0126 14:52:14.039310 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9\": container with ID starting with 55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9 not found: ID does not exist" containerID="55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.039385 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9"} err="failed to get container status \"55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9\": rpc error: code = NotFound desc = could not find container \"55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9\": container with ID starting with 55797f0fb43bd2af5b5e51420cdbb19925ddf7d2faa45ae7b2b57eb73a6030e9 not found: ID does not exist" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.039415 4823 scope.go:117] "RemoveContainer" containerID="fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5" Jan 26 14:52:14 crc kubenswrapper[4823]: E0126 
14:52:14.039768 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5\": container with ID starting with fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5 not found: ID does not exist" containerID="fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.039801 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5"} err="failed to get container status \"fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5\": rpc error: code = NotFound desc = could not find container \"fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5\": container with ID starting with fb6d5297933ca50ad8650763cf23a84211f38066ea904200f4142e3746460cd5 not found: ID does not exist" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.454626 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:52:14 crc kubenswrapper[4823]: I0126 14:52:14.458653 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.514993 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psj4l"] Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.515788 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-psj4l" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="registry-server" containerID="cri-o://2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e" gracePeriod=2 Jan 26 14:52:15 crc 
kubenswrapper[4823]: I0126 14:52:15.568002 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" path="/var/lib/kubelet/pods/4b0581ed-2fde-46ba-ae27-24b18e0e7ea8/volumes" Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.834479 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-plnn5"] Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.834834 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-plnn5" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="registry-server" containerID="cri-o://5f153700c84f92c32537d72942efb6b8f32bfafab46bc22dca9ff27f24db09a7" gracePeriod=30 Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.842063 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6jswn"] Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.842343 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6jswn" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="registry-server" containerID="cri-o://38a848e617ef64ea19440558db5265c2d48b9571cf04741e835b6cc38d80edda" gracePeriod=30 Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.850351 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7qhz"] Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.850651 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" containerID="cri-o://17a25a841dc22e00936511c31f6e04261b2d3adeab9f54164c9736697042d13b" gracePeriod=30 Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.859360 4823 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-m282g"] Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.859978 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m282g" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="registry-server" containerID="cri-o://e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a" gracePeriod=30 Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.871189 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6bkg"] Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.871509 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s6bkg" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="registry-server" containerID="cri-o://6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d" gracePeriod=30 Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.873837 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.958036 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.958462 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-utilities\") pod \"66da9ec1-7863-4edb-8204-e0ea1812c556\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.958541 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nssqw\" (UniqueName: \"kubernetes.io/projected/66da9ec1-7863-4edb-8204-e0ea1812c556-kube-api-access-nssqw\") pod \"66da9ec1-7863-4edb-8204-e0ea1812c556\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.958598 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-catalog-content\") pod \"66da9ec1-7863-4edb-8204-e0ea1812c556\" (UID: \"66da9ec1-7863-4edb-8204-e0ea1812c556\") " Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.960288 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-utilities" (OuterVolumeSpecName: "utilities") pod "66da9ec1-7863-4edb-8204-e0ea1812c556" (UID: "66da9ec1-7863-4edb-8204-e0ea1812c556"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:15 crc kubenswrapper[4823]: I0126 14:52:15.964935 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66da9ec1-7863-4edb-8204-e0ea1812c556-kube-api-access-nssqw" (OuterVolumeSpecName: "kube-api-access-nssqw") pod "66da9ec1-7863-4edb-8204-e0ea1812c556" (UID: "66da9ec1-7863-4edb-8204-e0ea1812c556"). InnerVolumeSpecName "kube-api-access-nssqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.008204 4823 generic.go:334] "Generic (PLEG): container finished" podID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerID="17a25a841dc22e00936511c31f6e04261b2d3adeab9f54164c9736697042d13b" exitCode=0 Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.008261 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" event={"ID":"b83ec26f-28e8-400b-94f2-e8526e3c0cb3","Type":"ContainerDied","Data":"17a25a841dc22e00936511c31f6e04261b2d3adeab9f54164c9736697042d13b"} Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.043684 4823 generic.go:334] "Generic (PLEG): container finished" podID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerID="2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e" exitCode=0 Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.043762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerDied","Data":"2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e"} Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.043797 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-psj4l" event={"ID":"66da9ec1-7863-4edb-8204-e0ea1812c556","Type":"ContainerDied","Data":"fe1c9fc9a70d4499dedac4a2b5bf5f3d579b099e61795e6d81b5b23a124a3161"} 
Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.043818 4823 scope.go:117] "RemoveContainer" containerID="2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.043938 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-psj4l" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.048994 4823 generic.go:334] "Generic (PLEG): container finished" podID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerID="38a848e617ef64ea19440558db5265c2d48b9571cf04741e835b6cc38d80edda" exitCode=0 Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.049076 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerDied","Data":"38a848e617ef64ea19440558db5265c2d48b9571cf04741e835b6cc38d80edda"} Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.052294 4823 generic.go:334] "Generic (PLEG): container finished" podID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerID="5f153700c84f92c32537d72942efb6b8f32bfafab46bc22dca9ff27f24db09a7" exitCode=0 Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.052327 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerDied","Data":"5f153700c84f92c32537d72942efb6b8f32bfafab46bc22dca9ff27f24db09a7"} Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.060019 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.060053 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nssqw\" (UniqueName: 
\"kubernetes.io/projected/66da9ec1-7863-4edb-8204-e0ea1812c556-kube-api-access-nssqw\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.093444 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66da9ec1-7863-4edb-8204-e0ea1812c556" (UID: "66da9ec1-7863-4edb-8204-e0ea1812c556"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.161653 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66da9ec1-7863-4edb-8204-e0ea1812c556-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.228110 4823 scope.go:117] "RemoveContainer" containerID="eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.272562 4823 scope.go:117] "RemoveContainer" containerID="aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.276663 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.291886 4823 scope.go:117] "RemoveContainer" containerID="2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e" Jan 26 14:52:16 crc kubenswrapper[4823]: E0126 14:52:16.292450 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e\": container with ID starting with 2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e not found: ID does not exist" containerID="2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.292497 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e"} err="failed to get container status \"2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e\": rpc error: code = NotFound desc = could not find container \"2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e\": container with ID starting with 2d40eefc9ed3685bc3af852a02183bd24ea566ac5946ab6d51da5d1183f9cc0e not found: ID does not exist" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.292530 4823 scope.go:117] "RemoveContainer" containerID="eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224" Jan 26 14:52:16 crc kubenswrapper[4823]: E0126 14:52:16.293065 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224\": container with ID starting with eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224 not found: ID does not exist" containerID="eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224" Jan 26 14:52:16 crc 
kubenswrapper[4823]: I0126 14:52:16.293132 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224"} err="failed to get container status \"eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224\": rpc error: code = NotFound desc = could not find container \"eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224\": container with ID starting with eab54a02515f6d9287e27e1cbcd8e7608cdce3635db202a6bb0a4594d5958224 not found: ID does not exist" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.293184 4823 scope.go:117] "RemoveContainer" containerID="aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2" Jan 26 14:52:16 crc kubenswrapper[4823]: E0126 14:52:16.294188 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2\": container with ID starting with aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2 not found: ID does not exist" containerID="aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.294219 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2"} err="failed to get container status \"aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2\": rpc error: code = NotFound desc = could not find container \"aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2\": container with ID starting with aa8e7c99a1073bb6b6747c7bea81aa03050184dc67d4665efbcb87917641f4b2 not found: ID does not exist" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.344656 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.368402 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.376795 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-utilities\") pod \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.376851 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwkq8\" (UniqueName: \"kubernetes.io/projected/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-kube-api-access-rwkq8\") pod \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.376981 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-catalog-content\") pod \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\" (UID: \"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.377678 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-utilities" (OuterVolumeSpecName: "utilities") pod "a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" (UID: "a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.383657 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-kube-api-access-rwkq8" (OuterVolumeSpecName: "kube-api-access-rwkq8") pod "a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" (UID: "a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8"). InnerVolumeSpecName "kube-api-access-rwkq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.385084 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-psj4l"] Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.385290 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.388485 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-psj4l"] Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.394175 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.406295 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.424569 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" (UID: "a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.477795 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-catalog-content\") pod \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.477856 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55txg\" (UniqueName: \"kubernetes.io/projected/0a7642fa-63ff-41bb-950e-b0d1badff9fe-kube-api-access-55txg\") pod \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.477875 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ctl9\" (UniqueName: \"kubernetes.io/projected/6cc17803-10bb-4c3c-b89f-4ecb574c2092-kube-api-access-9ctl9\") pod \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.477925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-utilities\") pod \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.477951 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-trusted-ca\") pod \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.477989 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-catalog-content\") pod \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\" (UID: \"0a7642fa-63ff-41bb-950e-b0d1badff9fe\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.478030 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-utilities\") pod \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\" (UID: \"6cc17803-10bb-4c3c-b89f-4ecb574c2092\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.478086 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrhqq\" (UniqueName: \"kubernetes.io/projected/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-kube-api-access-qrhqq\") pod \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.478113 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-operator-metrics\") pod \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\" (UID: \"b83ec26f-28e8-400b-94f2-e8526e3c0cb3\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.478316 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.478328 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.478339 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwkq8\" 
(UniqueName: \"kubernetes.io/projected/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8-kube-api-access-rwkq8\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.479578 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-utilities" (OuterVolumeSpecName: "utilities") pod "0a7642fa-63ff-41bb-950e-b0d1badff9fe" (UID: "0a7642fa-63ff-41bb-950e-b0d1badff9fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.481302 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b83ec26f-28e8-400b-94f2-e8526e3c0cb3" (UID: "b83ec26f-28e8-400b-94f2-e8526e3c0cb3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.482336 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-utilities" (OuterVolumeSpecName: "utilities") pod "6cc17803-10bb-4c3c-b89f-4ecb574c2092" (UID: "6cc17803-10bb-4c3c-b89f-4ecb574c2092"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.483604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7642fa-63ff-41bb-950e-b0d1badff9fe-kube-api-access-55txg" (OuterVolumeSpecName: "kube-api-access-55txg") pod "0a7642fa-63ff-41bb-950e-b0d1badff9fe" (UID: "0a7642fa-63ff-41bb-950e-b0d1badff9fe"). InnerVolumeSpecName "kube-api-access-55txg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.484350 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-kube-api-access-qrhqq" (OuterVolumeSpecName: "kube-api-access-qrhqq") pod "b83ec26f-28e8-400b-94f2-e8526e3c0cb3" (UID: "b83ec26f-28e8-400b-94f2-e8526e3c0cb3"). InnerVolumeSpecName "kube-api-access-qrhqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.486355 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b83ec26f-28e8-400b-94f2-e8526e3c0cb3" (UID: "b83ec26f-28e8-400b-94f2-e8526e3c0cb3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.486999 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc17803-10bb-4c3c-b89f-4ecb574c2092-kube-api-access-9ctl9" (OuterVolumeSpecName: "kube-api-access-9ctl9") pod "6cc17803-10bb-4c3c-b89f-4ecb574c2092" (UID: "6cc17803-10bb-4c3c-b89f-4ecb574c2092"). InnerVolumeSpecName "kube-api-access-9ctl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.533714 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a7642fa-63ff-41bb-950e-b0d1badff9fe" (UID: "0a7642fa-63ff-41bb-950e-b0d1badff9fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.579853 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xhms\" (UniqueName: \"kubernetes.io/projected/3efb7df4-2e94-4c83-a793-0fc25d69140e-kube-api-access-7xhms\") pod \"3efb7df4-2e94-4c83-a793-0fc25d69140e\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.579913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-catalog-content\") pod \"3efb7df4-2e94-4c83-a793-0fc25d69140e\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.579951 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-utilities\") pod \"3efb7df4-2e94-4c83-a793-0fc25d69140e\" (UID: \"3efb7df4-2e94-4c83-a793-0fc25d69140e\") " Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580240 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrhqq\" (UniqueName: \"kubernetes.io/projected/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-kube-api-access-qrhqq\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580257 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580272 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55txg\" (UniqueName: \"kubernetes.io/projected/0a7642fa-63ff-41bb-950e-b0d1badff9fe-kube-api-access-55txg\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 
crc kubenswrapper[4823]: I0126 14:52:16.580284 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ctl9\" (UniqueName: \"kubernetes.io/projected/6cc17803-10bb-4c3c-b89f-4ecb574c2092-kube-api-access-9ctl9\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580299 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580311 4823 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b83ec26f-28e8-400b-94f2-e8526e3c0cb3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580321 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a7642fa-63ff-41bb-950e-b0d1badff9fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.580332 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.581156 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-utilities" (OuterVolumeSpecName: "utilities") pod "3efb7df4-2e94-4c83-a793-0fc25d69140e" (UID: "3efb7df4-2e94-4c83-a793-0fc25d69140e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.584947 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3efb7df4-2e94-4c83-a793-0fc25d69140e-kube-api-access-7xhms" (OuterVolumeSpecName: "kube-api-access-7xhms") pod "3efb7df4-2e94-4c83-a793-0fc25d69140e" (UID: "3efb7df4-2e94-4c83-a793-0fc25d69140e"). InnerVolumeSpecName "kube-api-access-7xhms". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.609171 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3efb7df4-2e94-4c83-a793-0fc25d69140e" (UID: "3efb7df4-2e94-4c83-a793-0fc25d69140e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.611624 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6cc17803-10bb-4c3c-b89f-4ecb574c2092" (UID: "6cc17803-10bb-4c3c-b89f-4ecb574c2092"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.681308 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xhms\" (UniqueName: \"kubernetes.io/projected/3efb7df4-2e94-4c83-a793-0fc25d69140e-kube-api-access-7xhms\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.681378 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.681395 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3efb7df4-2e94-4c83-a793-0fc25d69140e-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:16 crc kubenswrapper[4823]: I0126 14:52:16.681410 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6cc17803-10bb-4c3c-b89f-4ecb574c2092-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.058539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" event={"ID":"b83ec26f-28e8-400b-94f2-e8526e3c0cb3","Type":"ContainerDied","Data":"cf6162dd2d4f6de69da7452a71ed284cec79f7b37cd99265731fb9a728e414ec"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.058605 4823 scope.go:117] "RemoveContainer" containerID="17a25a841dc22e00936511c31f6e04261b2d3adeab9f54164c9736697042d13b" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.058624 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-m7qhz" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.062829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6jswn" event={"ID":"0a7642fa-63ff-41bb-950e-b0d1badff9fe","Type":"ContainerDied","Data":"f16e97089fedfdbdf79575c6317a4c28ee1d33ab466828c211ff215631781a97"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.062940 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6jswn" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.069774 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-plnn5" event={"ID":"a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8","Type":"ContainerDied","Data":"eaa893f18464459a71f3ce41ef8576ee5bc690675c2f1634aa121fc9d3bbc4f0"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.069900 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-plnn5" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.072984 4823 generic.go:334] "Generic (PLEG): container finished" podID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerID="6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d" exitCode=0 Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.073056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerDied","Data":"6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.073090 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s6bkg" event={"ID":"6cc17803-10bb-4c3c-b89f-4ecb574c2092","Type":"ContainerDied","Data":"09322e77600cdacde6080ac23de2579f3715f738bd060980582d87b49a4f6443"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.073191 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s6bkg" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.089909 4823 scope.go:117] "RemoveContainer" containerID="38a848e617ef64ea19440558db5265c2d48b9571cf04741e835b6cc38d80edda" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.090755 4823 generic.go:334] "Generic (PLEG): container finished" podID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerID="e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a" exitCode=0 Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.090801 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerDied","Data":"e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.090839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m282g" event={"ID":"3efb7df4-2e94-4c83-a793-0fc25d69140e","Type":"ContainerDied","Data":"f4261a1c5bdd851ea3bd103f58bd5a9eec2c5b137e66aec6db2023259688d2dc"} Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.090969 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m282g" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.112582 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7qhz"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.114471 4823 scope.go:117] "RemoveContainer" containerID="a6556acdeaf6d986fdcae4669827fa76313e043a660cda174bc8057e96d0f373" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.116976 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-m7qhz"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.124653 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s6bkg"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.127972 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s6bkg"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.139199 4823 scope.go:117] "RemoveContainer" containerID="a69ded64b41702448bd69868fb0a7e26392896add97153e94e8b57e8d7b43942" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.139399 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m282g"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.145444 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m282g"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.151869 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-plnn5"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.159276 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-plnn5"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.162322 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6jswn"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.166985 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6jswn"] Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.171857 4823 scope.go:117] "RemoveContainer" containerID="5f153700c84f92c32537d72942efb6b8f32bfafab46bc22dca9ff27f24db09a7" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.191739 4823 scope.go:117] "RemoveContainer" containerID="6aeaff976246703bded7d3117a4239cd95cb315a38cb3c8b567e4ff4b222f742" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.207538 4823 scope.go:117] "RemoveContainer" containerID="f5603c70a203c298561e72ee2ac41cba5e3db21c9114bbbe6ecd58e51ac45d2c" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.223482 4823 scope.go:117] "RemoveContainer" containerID="6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.241346 4823 scope.go:117] "RemoveContainer" containerID="8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.262020 4823 scope.go:117] "RemoveContainer" containerID="b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.275889 4823 scope.go:117] "RemoveContainer" containerID="6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.276479 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d\": container with ID starting with 6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d not found: ID does not exist" containerID="6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 
14:52:17.276541 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d"} err="failed to get container status \"6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d\": rpc error: code = NotFound desc = could not find container \"6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d\": container with ID starting with 6da1a8bf60835a1cde9bad43cbb748a103f1ec9a4d58be5decb71ef0dbfa7b4d not found: ID does not exist" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.276588 4823 scope.go:117] "RemoveContainer" containerID="8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.276943 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189\": container with ID starting with 8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189 not found: ID does not exist" containerID="8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.277064 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189"} err="failed to get container status \"8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189\": rpc error: code = NotFound desc = could not find container \"8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189\": container with ID starting with 8aa89d6d73b365c23ae7133f8955295be1270d791be99f994f204d6242644189 not found: ID does not exist" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.277153 4823 scope.go:117] "RemoveContainer" containerID="b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766" Jan 26 14:52:17 crc 
kubenswrapper[4823]: E0126 14:52:17.277598 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766\": container with ID starting with b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766 not found: ID does not exist" containerID="b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.277634 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766"} err="failed to get container status \"b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766\": rpc error: code = NotFound desc = could not find container \"b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766\": container with ID starting with b7fa4463ad97977c954e961d6e827e35303c8cdc139b8653e2a3076d232ab766 not found: ID does not exist" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.277652 4823 scope.go:117] "RemoveContainer" containerID="e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.291086 4823 scope.go:117] "RemoveContainer" containerID="143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.306915 4823 scope.go:117] "RemoveContainer" containerID="846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.322220 4823 scope.go:117] "RemoveContainer" containerID="e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.322806 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a\": container with ID starting with e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a not found: ID does not exist" containerID="e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.322855 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a"} err="failed to get container status \"e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a\": rpc error: code = NotFound desc = could not find container \"e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a\": container with ID starting with e47820928d90a754b8530754869d0cceeea74f2c6856727f835659b629cbc99a not found: ID does not exist" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.322890 4823 scope.go:117] "RemoveContainer" containerID="143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.323265 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef\": container with ID starting with 143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef not found: ID does not exist" containerID="143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.323392 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef"} err="failed to get container status \"143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef\": rpc error: code = NotFound desc = could not find container \"143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef\": container with ID 
starting with 143aa343ed1d4c2a3acea6d554228e7c338fa875de7979e1b4b871e9a6c17eef not found: ID does not exist" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.323496 4823 scope.go:117] "RemoveContainer" containerID="846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.323977 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76\": container with ID starting with 846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76 not found: ID does not exist" containerID="846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.324015 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76"} err="failed to get container status \"846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76\": rpc error: code = NotFound desc = could not find container \"846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76\": container with ID starting with 846b506c694bb6e7fb0e2ad921fdbf3d062d23bab25e22c8e473107a0ea89b76 not found: ID does not exist" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.569601 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" path="/var/lib/kubelet/pods/0a7642fa-63ff-41bb-950e-b0d1badff9fe/volumes" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.570757 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" path="/var/lib/kubelet/pods/3efb7df4-2e94-4c83-a793-0fc25d69140e/volumes" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.571412 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" path="/var/lib/kubelet/pods/66da9ec1-7863-4edb-8204-e0ea1812c556/volumes" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.572580 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" path="/var/lib/kubelet/pods/6cc17803-10bb-4c3c-b89f-4ecb574c2092/volumes" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.573263 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" path="/var/lib/kubelet/pods/a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8/volumes" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.574514 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" path="/var/lib/kubelet/pods/b83ec26f-28e8-400b-94f2-e8526e3c0cb3/volumes" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.925849 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lg88q"] Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926121 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926144 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926161 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926170 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926179 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926187 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926198 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926205 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926219 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926227 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926237 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926244 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926255 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926262 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926274 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926280 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926291 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926299 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926309 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926315 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926326 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926333 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926343 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926351 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926380 4823 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926389 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926399 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926407 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926418 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926426 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926438 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926446 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926458 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926466 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926475 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926484 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926497 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926505 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="extract-utilities" Jan 26 14:52:17 crc kubenswrapper[4823]: E0126 14:52:17.926514 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926523 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="extract-content" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926652 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b0581ed-2fde-46ba-ae27-24b18e0e7ea8" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926670 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc17803-10bb-4c3c-b89f-4ecb574c2092" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926681 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926691 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3efb7df4-2e94-4c83-a793-0fc25d69140e" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926702 4823 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="66da9ec1-7863-4edb-8204-e0ea1812c556" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926717 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c3bb35-3c5c-4f1c-b2b1-171e49aaa1b8" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926728 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b83ec26f-28e8-400b-94f2-e8526e3c0cb3" containerName="marketplace-operator" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.926738 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7642fa-63ff-41bb-950e-b0d1badff9fe" containerName="registry-server" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.927676 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.930552 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.930930 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.931517 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 26 14:52:17 crc kubenswrapper[4823]: I0126 14:52:17.941002 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg88q"] Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.101405 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m5z2\" (UniqueName: \"kubernetes.io/projected/893e9991-2af7-4fd4-842d-70aa260ff39a-kube-api-access-8m5z2\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " 
pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.101533 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893e9991-2af7-4fd4-842d-70aa260ff39a-catalog-content\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.101596 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893e9991-2af7-4fd4-842d-70aa260ff39a-utilities\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.202396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893e9991-2af7-4fd4-842d-70aa260ff39a-utilities\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.202473 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m5z2\" (UniqueName: \"kubernetes.io/projected/893e9991-2af7-4fd4-842d-70aa260ff39a-kube-api-access-8m5z2\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.202502 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893e9991-2af7-4fd4-842d-70aa260ff39a-catalog-content\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " 
pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.203026 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/893e9991-2af7-4fd4-842d-70aa260ff39a-catalog-content\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.203263 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/893e9991-2af7-4fd4-842d-70aa260ff39a-utilities\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.224528 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m5z2\" (UniqueName: \"kubernetes.io/projected/893e9991-2af7-4fd4-842d-70aa260ff39a-kube-api-access-8m5z2\") pod \"redhat-operators-lg88q\" (UID: \"893e9991-2af7-4fd4-842d-70aa260ff39a\") " pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.258108 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.446642 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lg88q"] Jan 26 14:52:18 crc kubenswrapper[4823]: W0126 14:52:18.455462 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod893e9991_2af7_4fd4_842d_70aa260ff39a.slice/crio-789a59dae8708daa9ab06cd992d45b839f1076459925f0fe1127c7050fbeb794 WatchSource:0}: Error finding container 789a59dae8708daa9ab06cd992d45b839f1076459925f0fe1127c7050fbeb794: Status 404 returned error can't find the container with id 789a59dae8708daa9ab06cd992d45b839f1076459925f0fe1127c7050fbeb794 Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.920886 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ttpzc"] Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.922199 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.933807 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 26 14:52:18 crc kubenswrapper[4823]: I0126 14:52:18.968411 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttpzc"] Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.015402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-utilities\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.015458 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqg2\" (UniqueName: \"kubernetes.io/projected/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-kube-api-access-pkqg2\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.015482 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-catalog-content\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.120987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-utilities\") pod \"certified-operators-ttpzc\" (UID: 
\"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.121492 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqg2\" (UniqueName: \"kubernetes.io/projected/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-kube-api-access-pkqg2\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.121513 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-catalog-content\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.122552 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-utilities\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.122600 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-catalog-content\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.123171 4823 generic.go:334] "Generic (PLEG): container finished" podID="893e9991-2af7-4fd4-842d-70aa260ff39a" containerID="5b932fd8e29cdbe32d7f57e6d9381e77124b2ecf31745d497fbdb973c8e76eed" exitCode=0 Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.123417 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg88q" event={"ID":"893e9991-2af7-4fd4-842d-70aa260ff39a","Type":"ContainerDied","Data":"5b932fd8e29cdbe32d7f57e6d9381e77124b2ecf31745d497fbdb973c8e76eed"} Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.123537 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg88q" event={"ID":"893e9991-2af7-4fd4-842d-70aa260ff39a","Type":"ContainerStarted","Data":"789a59dae8708daa9ab06cd992d45b839f1076459925f0fe1127c7050fbeb794"} Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.151719 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqg2\" (UniqueName: \"kubernetes.io/projected/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-kube-api-access-pkqg2\") pod \"certified-operators-ttpzc\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.259059 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:19 crc kubenswrapper[4823]: I0126 14:52:19.497409 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ttpzc"] Jan 26 14:52:19 crc kubenswrapper[4823]: W0126 14:52:19.502958 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd371a6c1_69f3_4b6d_a68b_7bd70ea0d77d.slice/crio-8659f24b6dcd0f00fc2a01b6b93b80c1f719c64aad920ae60088c1baf137d876 WatchSource:0}: Error finding container 8659f24b6dcd0f00fc2a01b6b93b80c1f719c64aad920ae60088c1baf137d876: Status 404 returned error can't find the container with id 8659f24b6dcd0f00fc2a01b6b93b80c1f719c64aad920ae60088c1baf137d876 Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.134109 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerStarted","Data":"8659f24b6dcd0f00fc2a01b6b93b80c1f719c64aad920ae60088c1baf137d876"} Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.322913 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bhwd7"] Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.324012 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.328239 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.342800 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhwd7"] Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.441300 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-catalog-content\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.441575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-utilities\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.441632 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddz8\" (UniqueName: \"kubernetes.io/projected/39fa4549-6e37-47f2-b9c6-bc874636ff40-kube-api-access-dddz8\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.543242 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-catalog-content\") pod \"redhat-marketplace-bhwd7\" (UID: 
\"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.543295 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-utilities\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.543330 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dddz8\" (UniqueName: \"kubernetes.io/projected/39fa4549-6e37-47f2-b9c6-bc874636ff40-kube-api-access-dddz8\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.543835 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-utilities\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.544048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-catalog-content\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.572180 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddz8\" (UniqueName: \"kubernetes.io/projected/39fa4549-6e37-47f2-b9c6-bc874636ff40-kube-api-access-dddz8\") pod \"redhat-marketplace-bhwd7\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " 
pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:20 crc kubenswrapper[4823]: I0126 14:52:20.664549 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.144594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg88q" event={"ID":"893e9991-2af7-4fd4-842d-70aa260ff39a","Type":"ContainerStarted","Data":"5eed8e26927a4bfb0a9768917990b3cf98f0f0306e209e4a34e29f4c1bb7aaa1"} Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.146775 4823 generic.go:334] "Generic (PLEG): container finished" podID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerID="b2f8f9297e364d873dba02c04de2ae94584ece9a68959085606a80a6723ace93" exitCode=0 Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.146833 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerDied","Data":"b2f8f9297e364d873dba02c04de2ae94584ece9a68959085606a80a6723ace93"} Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.162357 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhwd7"] Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.322935 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k65lv"] Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.324539 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.329264 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.345496 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k65lv"] Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.462955 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-catalog-content\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.463016 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2scqn\" (UniqueName: \"kubernetes.io/projected/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-kube-api-access-2scqn\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.463050 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-utilities\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.565755 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-catalog-content\") pod \"community-operators-k65lv\" (UID: 
\"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.565838 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2scqn\" (UniqueName: \"kubernetes.io/projected/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-kube-api-access-2scqn\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.565902 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-utilities\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.566511 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-catalog-content\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.566686 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-utilities\") pod \"community-operators-k65lv\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.596654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2scqn\" (UniqueName: \"kubernetes.io/projected/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-kube-api-access-2scqn\") pod \"community-operators-k65lv\" (UID: 
\"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.641842 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:21 crc kubenswrapper[4823]: I0126 14:52:21.962577 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k65lv"] Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.154488 4823 generic.go:334] "Generic (PLEG): container finished" podID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerID="a742e9569338741bdd0336ecbb28adc26bd5adc3555ee2753a58ed392f130721" exitCode=0 Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.154602 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhwd7" event={"ID":"39fa4549-6e37-47f2-b9c6-bc874636ff40","Type":"ContainerDied","Data":"a742e9569338741bdd0336ecbb28adc26bd5adc3555ee2753a58ed392f130721"} Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.155111 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhwd7" event={"ID":"39fa4549-6e37-47f2-b9c6-bc874636ff40","Type":"ContainerStarted","Data":"f525be1a4a8eda4915ec814b4e5dbe90bee5da3de47ae22a21dd6f43cdaa3883"} Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.161196 4823 generic.go:334] "Generic (PLEG): container finished" podID="893e9991-2af7-4fd4-842d-70aa260ff39a" containerID="5eed8e26927a4bfb0a9768917990b3cf98f0f0306e209e4a34e29f4c1bb7aaa1" exitCode=0 Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.161306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg88q" event={"ID":"893e9991-2af7-4fd4-842d-70aa260ff39a","Type":"ContainerDied","Data":"5eed8e26927a4bfb0a9768917990b3cf98f0f0306e209e4a34e29f4c1bb7aaa1"} Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.167000 
4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerStarted","Data":"45ede807e304b93e76243accbb58a6bd0de4c0d09f0b88d8d0d625e10260ed3b"} Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.171875 4823 generic.go:334] "Generic (PLEG): container finished" podID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerID="be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c" exitCode=0 Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.171956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerDied","Data":"be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c"} Jan 26 14:52:22 crc kubenswrapper[4823]: I0126 14:52:22.172003 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerStarted","Data":"b634d1feb876832bb2b46c06f7286cafa99e085e34b069bf3eb8cc2a0fa17699"} Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.185326 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lg88q" event={"ID":"893e9991-2af7-4fd4-842d-70aa260ff39a","Type":"ContainerStarted","Data":"ed3dfe0d3eee4055d898eed0afd9f9c96d98461a4118980094be48b890ba46bb"} Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.190995 4823 generic.go:334] "Generic (PLEG): container finished" podID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerID="45ede807e304b93e76243accbb58a6bd0de4c0d09f0b88d8d0d625e10260ed3b" exitCode=0 Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.191084 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" 
event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerDied","Data":"45ede807e304b93e76243accbb58a6bd0de4c0d09f0b88d8d0d625e10260ed3b"} Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.195538 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerStarted","Data":"15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43"} Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.198738 4823 generic.go:334] "Generic (PLEG): container finished" podID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerID="ab84fc287342497186b89bb89ce4684cc8bf029f7f63285114623f1ea16dd579" exitCode=0 Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.198793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhwd7" event={"ID":"39fa4549-6e37-47f2-b9c6-bc874636ff40","Type":"ContainerDied","Data":"ab84fc287342497186b89bb89ce4684cc8bf029f7f63285114623f1ea16dd579"} Jan 26 14:52:23 crc kubenswrapper[4823]: I0126 14:52:23.230701 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lg88q" podStartSLOduration=2.745301413 podStartE2EDuration="6.230681002s" podCreationTimestamp="2026-01-26 14:52:17 +0000 UTC" firstStartedPulling="2026-01-26 14:52:19.126466463 +0000 UTC m=+335.811929568" lastFinishedPulling="2026-01-26 14:52:22.611845892 +0000 UTC m=+339.297309157" observedRunningTime="2026-01-26 14:52:23.208687689 +0000 UTC m=+339.894150794" watchObservedRunningTime="2026-01-26 14:52:23.230681002 +0000 UTC m=+339.916144107" Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.209851 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhwd7" event={"ID":"39fa4549-6e37-47f2-b9c6-bc874636ff40","Type":"ContainerStarted","Data":"458d36e5b546e28adc50e646db10b9509e5d476dc59caf0b61bf92601ebf5944"} 
Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.212690 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerStarted","Data":"3185df8bfd366c7103e8286c0e283c93c0c0071fb14b428dae04054ab4e585fd"} Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.217147 4823 generic.go:334] "Generic (PLEG): container finished" podID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerID="15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43" exitCode=0 Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.217240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerDied","Data":"15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43"} Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.217291 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerStarted","Data":"8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff"} Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.231070 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bhwd7" podStartSLOduration=2.761410499 podStartE2EDuration="4.23104861s" podCreationTimestamp="2026-01-26 14:52:20 +0000 UTC" firstStartedPulling="2026-01-26 14:52:22.156829169 +0000 UTC m=+338.842292274" lastFinishedPulling="2026-01-26 14:52:23.62646727 +0000 UTC m=+340.311930385" observedRunningTime="2026-01-26 14:52:24.2282762 +0000 UTC m=+340.913739315" watchObservedRunningTime="2026-01-26 14:52:24.23104861 +0000 UTC m=+340.916511715" Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.255214 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-ttpzc" podStartSLOduration=3.74046533 podStartE2EDuration="6.255191796s" podCreationTimestamp="2026-01-26 14:52:18 +0000 UTC" firstStartedPulling="2026-01-26 14:52:21.14876577 +0000 UTC m=+337.834228875" lastFinishedPulling="2026-01-26 14:52:23.663492236 +0000 UTC m=+340.348955341" observedRunningTime="2026-01-26 14:52:24.250281074 +0000 UTC m=+340.935744189" watchObservedRunningTime="2026-01-26 14:52:24.255191796 +0000 UTC m=+340.940654901" Jan 26 14:52:24 crc kubenswrapper[4823]: I0126 14:52:24.274915 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k65lv" podStartSLOduration=1.680487189 podStartE2EDuration="3.274898013s" podCreationTimestamp="2026-01-26 14:52:21 +0000 UTC" firstStartedPulling="2026-01-26 14:52:22.177386291 +0000 UTC m=+338.862849396" lastFinishedPulling="2026-01-26 14:52:23.771797115 +0000 UTC m=+340.457260220" observedRunningTime="2026-01-26 14:52:24.273210954 +0000 UTC m=+340.958674059" watchObservedRunningTime="2026-01-26 14:52:24.274898013 +0000 UTC m=+340.960361118" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.109289 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bjnqv"] Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.110178 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.114131 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.114344 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.131885 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46f1bb1c-25a5-495b-b871-28c248efe429-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.132027 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/46f1bb1c-25a5-495b-b871-28c248efe429-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.132146 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m75vh\" (UniqueName: \"kubernetes.io/projected/46f1bb1c-25a5-495b-b871-28c248efe429-kube-api-access-m75vh\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.136560 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4"] Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.136803 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" podUID="c2f8927c-1301-492d-ae9a-487ec70b3038" containerName="route-controller-manager" containerID="cri-o://4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444" gracePeriod=30 Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.150693 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.155658 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bjnqv"] Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.205887 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5hr6d"] Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.206210 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" podUID="7eea18e5-bc89-4c10-a843-c8b374a239a2" containerName="controller-manager" containerID="cri-o://93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711" gracePeriod=30 Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.233613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/46f1bb1c-25a5-495b-b871-28c248efe429-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.233727 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-m75vh\" (UniqueName: \"kubernetes.io/projected/46f1bb1c-25a5-495b-b871-28c248efe429-kube-api-access-m75vh\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.233769 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46f1bb1c-25a5-495b-b871-28c248efe429-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.236558 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46f1bb1c-25a5-495b-b871-28c248efe429-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.243974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/46f1bb1c-25a5-495b-b871-28c248efe429-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.292870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m75vh\" (UniqueName: \"kubernetes.io/projected/46f1bb1c-25a5-495b-b871-28c248efe429-kube-api-access-m75vh\") pod \"marketplace-operator-79b997595-bjnqv\" (UID: \"46f1bb1c-25a5-495b-b871-28c248efe429\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.427060 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.758946 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.884161 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.951638 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4wqp\" (UniqueName: \"kubernetes.io/projected/c2f8927c-1301-492d-ae9a-487ec70b3038-kube-api-access-f4wqp\") pod \"c2f8927c-1301-492d-ae9a-487ec70b3038\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.951838 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-config\") pod \"c2f8927c-1301-492d-ae9a-487ec70b3038\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.951894 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-client-ca\") pod \"c2f8927c-1301-492d-ae9a-487ec70b3038\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.951995 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c2f8927c-1301-492d-ae9a-487ec70b3038-serving-cert\") pod \"c2f8927c-1301-492d-ae9a-487ec70b3038\" (UID: \"c2f8927c-1301-492d-ae9a-487ec70b3038\") " Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.954894 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-config" (OuterVolumeSpecName: "config") pod "c2f8927c-1301-492d-ae9a-487ec70b3038" (UID: "c2f8927c-1301-492d-ae9a-487ec70b3038"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.955755 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-client-ca" (OuterVolumeSpecName: "client-ca") pod "c2f8927c-1301-492d-ae9a-487ec70b3038" (UID: "c2f8927c-1301-492d-ae9a-487ec70b3038"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.961493 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f8927c-1301-492d-ae9a-487ec70b3038-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c2f8927c-1301-492d-ae9a-487ec70b3038" (UID: "c2f8927c-1301-492d-ae9a-487ec70b3038"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.965975 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2f8927c-1301-492d-ae9a-487ec70b3038-kube-api-access-f4wqp" (OuterVolumeSpecName: "kube-api-access-f4wqp") pod "c2f8927c-1301-492d-ae9a-487ec70b3038" (UID: "c2f8927c-1301-492d-ae9a-487ec70b3038"). InnerVolumeSpecName "kube-api-access-f4wqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:25 crc kubenswrapper[4823]: I0126 14:52:25.980984 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-bjnqv"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.053949 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eea18e5-bc89-4c10-a843-c8b374a239a2-serving-cert\") pod \"7eea18e5-bc89-4c10-a843-c8b374a239a2\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.054011 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-proxy-ca-bundles\") pod \"7eea18e5-bc89-4c10-a843-c8b374a239a2\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.054142 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-config\") pod \"7eea18e5-bc89-4c10-a843-c8b374a239a2\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.054220 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glbdl\" (UniqueName: \"kubernetes.io/projected/7eea18e5-bc89-4c10-a843-c8b374a239a2-kube-api-access-glbdl\") pod \"7eea18e5-bc89-4c10-a843-c8b374a239a2\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.055300 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-config" (OuterVolumeSpecName: "config") pod "7eea18e5-bc89-4c10-a843-c8b374a239a2" (UID: "7eea18e5-bc89-4c10-a843-c8b374a239a2"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.055419 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7eea18e5-bc89-4c10-a843-c8b374a239a2" (UID: "7eea18e5-bc89-4c10-a843-c8b374a239a2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.055444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-client-ca\") pod \"7eea18e5-bc89-4c10-a843-c8b374a239a2\" (UID: \"7eea18e5-bc89-4c10-a843-c8b374a239a2\") " Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056094 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-client-ca" (OuterVolumeSpecName: "client-ca") pod "7eea18e5-bc89-4c10-a843-c8b374a239a2" (UID: "7eea18e5-bc89-4c10-a843-c8b374a239a2"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056321 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056341 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056353 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056382 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c2f8927c-1301-492d-ae9a-487ec70b3038-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056393 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea18e5-bc89-4c10-a843-c8b374a239a2-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056403 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c2f8927c-1301-492d-ae9a-487ec70b3038-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.056413 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4wqp\" (UniqueName: \"kubernetes.io/projected/c2f8927c-1301-492d-ae9a-487ec70b3038-kube-api-access-f4wqp\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.058656 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eea18e5-bc89-4c10-a843-c8b374a239a2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7eea18e5-bc89-4c10-a843-c8b374a239a2" (UID: "7eea18e5-bc89-4c10-a843-c8b374a239a2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.059926 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eea18e5-bc89-4c10-a843-c8b374a239a2-kube-api-access-glbdl" (OuterVolumeSpecName: "kube-api-access-glbdl") pod "7eea18e5-bc89-4c10-a843-c8b374a239a2" (UID: "7eea18e5-bc89-4c10-a843-c8b374a239a2"). InnerVolumeSpecName "kube-api-access-glbdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.158253 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7eea18e5-bc89-4c10-a843-c8b374a239a2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.158288 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glbdl\" (UniqueName: \"kubernetes.io/projected/7eea18e5-bc89-4c10-a843-c8b374a239a2-kube-api-access-glbdl\") on node \"crc\" DevicePath \"\"" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.234578 4823 generic.go:334] "Generic (PLEG): container finished" podID="c2f8927c-1301-492d-ae9a-487ec70b3038" containerID="4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444" exitCode=0 Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.234706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" event={"ID":"c2f8927c-1301-492d-ae9a-487ec70b3038","Type":"ContainerDied","Data":"4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444"} Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 
14:52:26.234742 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" event={"ID":"c2f8927c-1301-492d-ae9a-487ec70b3038","Type":"ContainerDied","Data":"8dabe89121432398c448fcd83cad66fedbdd4049f61afde59af445d4663c7326"} Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.234761 4823 scope.go:117] "RemoveContainer" containerID="4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.234920 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.240769 4823 generic.go:334] "Generic (PLEG): container finished" podID="7eea18e5-bc89-4c10-a843-c8b374a239a2" containerID="93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711" exitCode=0 Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.240875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" event={"ID":"7eea18e5-bc89-4c10-a843-c8b374a239a2","Type":"ContainerDied","Data":"93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711"} Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.240924 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.240943 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5hr6d" event={"ID":"7eea18e5-bc89-4c10-a843-c8b374a239a2","Type":"ContainerDied","Data":"302938f753799ff5b91d7742a2ba0c225c306102cf51619c5714375db5e8559d"} Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.245116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" event={"ID":"46f1bb1c-25a5-495b-b871-28c248efe429","Type":"ContainerStarted","Data":"4c8c088b553fac2127ea42ce4cc5f9b82176affa100c04dea8967367304c9c2a"} Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.245179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" event={"ID":"46f1bb1c-25a5-495b-b871-28c248efe429","Type":"ContainerStarted","Data":"0dacb1903de42c3511254adee32af92e36075992e11858e19b3f592ab29611c8"} Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.245659 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.247167 4823 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-bjnqv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.62:8080/healthz\": dial tcp 10.217.0.62:8080: connect: connection refused" start-of-body= Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.247212 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" podUID="46f1bb1c-25a5-495b-b871-28c248efe429" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.62:8080/healthz\": dial 
tcp 10.217.0.62:8080: connect: connection refused" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.278512 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" podStartSLOduration=1.278488551 podStartE2EDuration="1.278488551s" podCreationTimestamp="2026-01-26 14:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:52:26.273883688 +0000 UTC m=+342.959346793" watchObservedRunningTime="2026-01-26 14:52:26.278488551 +0000 UTC m=+342.963951676" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.290722 4823 scope.go:117] "RemoveContainer" containerID="4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444" Jan 26 14:52:26 crc kubenswrapper[4823]: E0126 14:52:26.292685 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444\": container with ID starting with 4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444 not found: ID does not exist" containerID="4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.292747 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444"} err="failed to get container status \"4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444\": rpc error: code = NotFound desc = could not find container \"4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444\": container with ID starting with 4ce707399f057a77aefbf52ca1dba4af6f9449c1f1a140ce249963aee470f444 not found: ID does not exist" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.292808 4823 scope.go:117] "RemoveContainer" 
containerID="93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.308530 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.313503 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tdvm4"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.316274 4823 scope.go:117] "RemoveContainer" containerID="93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711" Jan 26 14:52:26 crc kubenswrapper[4823]: E0126 14:52:26.318444 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711\": container with ID starting with 93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711 not found: ID does not exist" containerID="93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.318492 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711"} err="failed to get container status \"93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711\": rpc error: code = NotFound desc = could not find container \"93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711\": container with ID starting with 93a4c1099e330a56f42af7c99c40c57668b4c142ef1eaea692adcd7a43c95711 not found: ID does not exist" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.327461 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5hr6d"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.337188 4823 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5hr6d"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.735651 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2"] Jan 26 14:52:26 crc kubenswrapper[4823]: E0126 14:52:26.736061 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eea18e5-bc89-4c10-a843-c8b374a239a2" containerName="controller-manager" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.736081 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eea18e5-bc89-4c10-a843-c8b374a239a2" containerName="controller-manager" Jan 26 14:52:26 crc kubenswrapper[4823]: E0126 14:52:26.736100 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2f8927c-1301-492d-ae9a-487ec70b3038" containerName="route-controller-manager" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.736110 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2f8927c-1301-492d-ae9a-487ec70b3038" containerName="route-controller-manager" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.736257 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eea18e5-bc89-4c10-a843-c8b374a239a2" containerName="controller-manager" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.736278 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2f8927c-1301-492d-ae9a-487ec70b3038" containerName="route-controller-manager" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.736946 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.738868 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-k8fxp"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.739485 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.739589 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.739720 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.740080 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.740547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.740682 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.746157 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.746344 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.746895 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.747234 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.747330 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.747674 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.747761 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.755091 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-k8fxp"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.758437 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.771911 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2"] Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-config\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773390 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-proxy-ca-bundles\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773425 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrzcz\" (UniqueName: \"kubernetes.io/projected/85e52cb6-eea4-443b-aea9-ea6c0279ca10-kube-api-access-vrzcz\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773455 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms66s\" (UniqueName: \"kubernetes.io/projected/921a3461-a436-4802-b471-7f9081d27f62-kube-api-access-ms66s\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773486 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-config\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-client-ca\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " 
pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921a3461-a436-4802-b471-7f9081d27f62-serving-cert\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773573 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e52cb6-eea4-443b-aea9-ea6c0279ca10-serving-cert\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.773606 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-client-ca\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874590 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e52cb6-eea4-443b-aea9-ea6c0279ca10-serving-cert\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874650 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-client-ca\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874688 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-config\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874718 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-proxy-ca-bundles\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874744 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrzcz\" (UniqueName: \"kubernetes.io/projected/85e52cb6-eea4-443b-aea9-ea6c0279ca10-kube-api-access-vrzcz\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms66s\" (UniqueName: \"kubernetes.io/projected/921a3461-a436-4802-b471-7f9081d27f62-kube-api-access-ms66s\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc 
kubenswrapper[4823]: I0126 14:52:26.874799 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-config\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.874841 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-client-ca\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.876219 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-config\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.876249 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-client-ca\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.876689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-client-ca\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " 
pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.876713 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-config\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.876759 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-proxy-ca-bundles\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.877268 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921a3461-a436-4802-b471-7f9081d27f62-serving-cert\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.880028 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e52cb6-eea4-443b-aea9-ea6c0279ca10-serving-cert\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.881057 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921a3461-a436-4802-b471-7f9081d27f62-serving-cert\") pod 
\"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.904917 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms66s\" (UniqueName: \"kubernetes.io/projected/921a3461-a436-4802-b471-7f9081d27f62-kube-api-access-ms66s\") pod \"controller-manager-67db6f585-k8fxp\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:26 crc kubenswrapper[4823]: I0126 14:52:26.905478 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrzcz\" (UniqueName: \"kubernetes.io/projected/85e52cb6-eea4-443b-aea9-ea6c0279ca10-kube-api-access-vrzcz\") pod \"route-controller-manager-75fbfcbdd9-fmfp2\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.108066 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.120219 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.282357 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-bjnqv" Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.424680 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2"] Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.479598 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-k8fxp"] Jan 26 14:52:27 crc kubenswrapper[4823]: W0126 14:52:27.497179 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod921a3461_a436_4802_b471_7f9081d27f62.slice/crio-0bb3a013e01df0dc430c57c5d88d88f2be9967f3582e0923cd348f6b1bc8c36b WatchSource:0}: Error finding container 0bb3a013e01df0dc430c57c5d88d88f2be9967f3582e0923cd348f6b1bc8c36b: Status 404 returned error can't find the container with id 0bb3a013e01df0dc430c57c5d88d88f2be9967f3582e0923cd348f6b1bc8c36b Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.584870 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eea18e5-bc89-4c10-a843-c8b374a239a2" path="/var/lib/kubelet/pods/7eea18e5-bc89-4c10-a843-c8b374a239a2/volumes" Jan 26 14:52:27 crc kubenswrapper[4823]: I0126 14:52:27.585623 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2f8927c-1301-492d-ae9a-487ec70b3038" path="/var/lib/kubelet/pods/c2f8927c-1301-492d-ae9a-487ec70b3038/volumes" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.258927 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.259348 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.283557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" event={"ID":"921a3461-a436-4802-b471-7f9081d27f62","Type":"ContainerStarted","Data":"2703e1a4915e1af28c8db5aa605058d160813cada8bad2bd6f60065469cd7dd2"} Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.283620 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" event={"ID":"921a3461-a436-4802-b471-7f9081d27f62","Type":"ContainerStarted","Data":"0bb3a013e01df0dc430c57c5d88d88f2be9967f3582e0923cd348f6b1bc8c36b"} Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.284290 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.291058 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" event={"ID":"85e52cb6-eea4-443b-aea9-ea6c0279ca10","Type":"ContainerStarted","Data":"57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9"} Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.291133 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.291148 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" event={"ID":"85e52cb6-eea4-443b-aea9-ea6c0279ca10","Type":"ContainerStarted","Data":"6252656b5aea0be7ea92f5e8a29f0554569600282b3dc8186175ffc550fa73f3"} Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.291332 4823 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.345041 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" podStartSLOduration=3.3449869 podStartE2EDuration="3.3449869s" podCreationTimestamp="2026-01-26 14:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:52:28.31790421 +0000 UTC m=+345.003367325" watchObservedRunningTime="2026-01-26 14:52:28.3449869 +0000 UTC m=+345.030450015" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.346934 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" podStartSLOduration=3.346924315 podStartE2EDuration="3.346924315s" podCreationTimestamp="2026-01-26 14:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:52:28.340976564 +0000 UTC m=+345.026439669" watchObservedRunningTime="2026-01-26 14:52:28.346924315 +0000 UTC m=+345.032387420" Jan 26 14:52:28 crc kubenswrapper[4823]: I0126 14:52:28.373522 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:52:29 crc kubenswrapper[4823]: I0126 14:52:29.259519 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:29 crc kubenswrapper[4823]: I0126 14:52:29.260030 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:29 crc kubenswrapper[4823]: I0126 14:52:29.314981 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:29 crc kubenswrapper[4823]: I0126 14:52:29.355078 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lg88q" podUID="893e9991-2af7-4fd4-842d-70aa260ff39a" containerName="registry-server" probeResult="failure" output=< Jan 26 14:52:29 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 14:52:29 crc kubenswrapper[4823]: > Jan 26 14:52:29 crc kubenswrapper[4823]: I0126 14:52:29.384676 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 14:52:30 crc kubenswrapper[4823]: I0126 14:52:30.665728 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:30 crc kubenswrapper[4823]: I0126 14:52:30.666169 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:30 crc kubenswrapper[4823]: I0126 14:52:30.720257 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:31 crc kubenswrapper[4823]: I0126 14:52:31.355760 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 14:52:31 crc kubenswrapper[4823]: I0126 14:52:31.642963 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:31 crc kubenswrapper[4823]: I0126 14:52:31.643037 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:31 crc kubenswrapper[4823]: I0126 14:52:31.688837 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:32 crc kubenswrapper[4823]: I0126 14:52:32.368015 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k65lv" Jan 26 14:52:38 crc kubenswrapper[4823]: I0126 14:52:38.310858 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:52:38 crc kubenswrapper[4823]: I0126 14:52:38.358120 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lg88q" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.863722 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n6rwb"] Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.865857 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.880215 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n6rwb"] Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905021 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4fff3391-dc10-4c2b-8868-40123c8147e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-bound-sa-token\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: 
\"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905193 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4fff3391-dc10-4c2b-8868-40123c8147e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905235 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4fh8\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-kube-api-access-j4fh8\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905275 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4fff3391-dc10-4c2b-8868-40123c8147e6-trusted-ca\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4fff3391-dc10-4c2b-8868-40123c8147e6-registry-certificates\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905635 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.905722 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-registry-tls\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:00 crc kubenswrapper[4823]: I0126 14:53:00.933673 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.007773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-registry-tls\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.007867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4fff3391-dc10-4c2b-8868-40123c8147e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.007914 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-bound-sa-token\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.007936 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4fff3391-dc10-4c2b-8868-40123c8147e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.007963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4fh8\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-kube-api-access-j4fh8\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.007987 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4fff3391-dc10-4c2b-8868-40123c8147e6-trusted-ca\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.008024 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4fff3391-dc10-4c2b-8868-40123c8147e6-registry-certificates\") pod 
\"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.008522 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4fff3391-dc10-4c2b-8868-40123c8147e6-ca-trust-extracted\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.009641 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4fff3391-dc10-4c2b-8868-40123c8147e6-trusted-ca\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.009837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4fff3391-dc10-4c2b-8868-40123c8147e6-registry-certificates\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.015751 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4fff3391-dc10-4c2b-8868-40123c8147e6-installation-pull-secrets\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.015911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-registry-tls\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.031699 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4fh8\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-kube-api-access-j4fh8\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.032174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4fff3391-dc10-4c2b-8868-40123c8147e6-bound-sa-token\") pod \"image-registry-66df7c8f76-n6rwb\" (UID: \"4fff3391-dc10-4c2b-8868-40123c8147e6\") " pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.184952 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:01 crc kubenswrapper[4823]: I0126 14:53:01.622340 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-n6rwb"] Jan 26 14:53:02 crc kubenswrapper[4823]: I0126 14:53:02.507426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" event={"ID":"4fff3391-dc10-4c2b-8868-40123c8147e6","Type":"ContainerStarted","Data":"47dd1dbe6972611a9e596d653f5ff839810865eeca1221cffd732a9221f23fe8"} Jan 26 14:53:02 crc kubenswrapper[4823]: I0126 14:53:02.507712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" event={"ID":"4fff3391-dc10-4c2b-8868-40123c8147e6","Type":"ContainerStarted","Data":"8465fc808bc32e7e840dfd08a7fdb7bf19360b8ffd36c6bd39a434f0b0524931"} Jan 26 14:53:02 crc kubenswrapper[4823]: I0126 14:53:02.507826 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:02 crc kubenswrapper[4823]: I0126 14:53:02.538770 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" podStartSLOduration=2.5387331570000002 podStartE2EDuration="2.538733157s" podCreationTimestamp="2026-01-26 14:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:53:02.531502168 +0000 UTC m=+379.216965283" watchObservedRunningTime="2026-01-26 14:53:02.538733157 +0000 UTC m=+379.224196292" Jan 26 14:53:04 crc kubenswrapper[4823]: I0126 14:53:04.507950 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:53:04 crc kubenswrapper[4823]: I0126 14:53:04.508059 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:53:05 crc kubenswrapper[4823]: I0126 14:53:05.666476 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-k8fxp"] Jan 26 14:53:05 crc kubenswrapper[4823]: I0126 14:53:05.667390 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" podUID="921a3461-a436-4802-b471-7f9081d27f62" containerName="controller-manager" containerID="cri-o://2703e1a4915e1af28c8db5aa605058d160813cada8bad2bd6f60065469cd7dd2" gracePeriod=30 Jan 26 14:53:05 crc kubenswrapper[4823]: I0126 14:53:05.792070 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2"] Jan 26 14:53:05 crc kubenswrapper[4823]: I0126 14:53:05.792424 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" podUID="85e52cb6-eea4-443b-aea9-ea6c0279ca10" containerName="route-controller-manager" containerID="cri-o://57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9" gracePeriod=30 Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.197501 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.304242 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrzcz\" (UniqueName: \"kubernetes.io/projected/85e52cb6-eea4-443b-aea9-ea6c0279ca10-kube-api-access-vrzcz\") pod \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.304322 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-client-ca\") pod \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.304976 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e52cb6-eea4-443b-aea9-ea6c0279ca10-serving-cert\") pod \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.305262 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-config\") pod \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\" (UID: \"85e52cb6-eea4-443b-aea9-ea6c0279ca10\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.305604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-client-ca" (OuterVolumeSpecName: "client-ca") pod "85e52cb6-eea4-443b-aea9-ea6c0279ca10" (UID: "85e52cb6-eea4-443b-aea9-ea6c0279ca10"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.306116 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.306513 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-config" (OuterVolumeSpecName: "config") pod "85e52cb6-eea4-443b-aea9-ea6c0279ca10" (UID: "85e52cb6-eea4-443b-aea9-ea6c0279ca10"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.312671 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e52cb6-eea4-443b-aea9-ea6c0279ca10-kube-api-access-vrzcz" (OuterVolumeSpecName: "kube-api-access-vrzcz") pod "85e52cb6-eea4-443b-aea9-ea6c0279ca10" (UID: "85e52cb6-eea4-443b-aea9-ea6c0279ca10"). InnerVolumeSpecName "kube-api-access-vrzcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.317134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e52cb6-eea4-443b-aea9-ea6c0279ca10-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85e52cb6-eea4-443b-aea9-ea6c0279ca10" (UID: "85e52cb6-eea4-443b-aea9-ea6c0279ca10"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.407754 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85e52cb6-eea4-443b-aea9-ea6c0279ca10-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.407794 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrzcz\" (UniqueName: \"kubernetes.io/projected/85e52cb6-eea4-443b-aea9-ea6c0279ca10-kube-api-access-vrzcz\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.407811 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85e52cb6-eea4-443b-aea9-ea6c0279ca10-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.535822 4823 generic.go:334] "Generic (PLEG): container finished" podID="85e52cb6-eea4-443b-aea9-ea6c0279ca10" containerID="57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9" exitCode=0 Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.535901 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" event={"ID":"85e52cb6-eea4-443b-aea9-ea6c0279ca10","Type":"ContainerDied","Data":"57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9"} Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.536004 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" event={"ID":"85e52cb6-eea4-443b-aea9-ea6c0279ca10","Type":"ContainerDied","Data":"6252656b5aea0be7ea92f5e8a29f0554569600282b3dc8186175ffc550fa73f3"} Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.536032 4823 scope.go:117] "RemoveContainer" containerID="57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9" Jan 26 14:53:06 
crc kubenswrapper[4823]: I0126 14:53:06.536529 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.538464 4823 generic.go:334] "Generic (PLEG): container finished" podID="921a3461-a436-4802-b471-7f9081d27f62" containerID="2703e1a4915e1af28c8db5aa605058d160813cada8bad2bd6f60065469cd7dd2" exitCode=0 Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.538501 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" event={"ID":"921a3461-a436-4802-b471-7f9081d27f62","Type":"ContainerDied","Data":"2703e1a4915e1af28c8db5aa605058d160813cada8bad2bd6f60065469cd7dd2"} Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.562468 4823 scope.go:117] "RemoveContainer" containerID="57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9" Jan 26 14:53:06 crc kubenswrapper[4823]: E0126 14:53:06.563750 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9\": container with ID starting with 57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9 not found: ID does not exist" containerID="57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.563790 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9"} err="failed to get container status \"57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9\": rpc error: code = NotFound desc = could not find container \"57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9\": container with ID starting with 
57a4d2462ee90e10627d7de559978eb6d0e47cafab76d1796ae8861eb57a47b9 not found: ID does not exist" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.574244 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2"] Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.577937 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75fbfcbdd9-fmfp2"] Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.578871 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.613137 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-proxy-ca-bundles\") pod \"921a3461-a436-4802-b471-7f9081d27f62\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.614252 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "921a3461-a436-4802-b471-7f9081d27f62" (UID: "921a3461-a436-4802-b471-7f9081d27f62"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.614330 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921a3461-a436-4802-b471-7f9081d27f62-serving-cert\") pod \"921a3461-a436-4802-b471-7f9081d27f62\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.614401 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-config\") pod \"921a3461-a436-4802-b471-7f9081d27f62\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.614501 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms66s\" (UniqueName: \"kubernetes.io/projected/921a3461-a436-4802-b471-7f9081d27f62-kube-api-access-ms66s\") pod \"921a3461-a436-4802-b471-7f9081d27f62\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.614531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-client-ca\") pod \"921a3461-a436-4802-b471-7f9081d27f62\" (UID: \"921a3461-a436-4802-b471-7f9081d27f62\") " Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.614915 4823 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.618996 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"921a3461-a436-4802-b471-7f9081d27f62" (UID: "921a3461-a436-4802-b471-7f9081d27f62"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.619690 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-config" (OuterVolumeSpecName: "config") pod "921a3461-a436-4802-b471-7f9081d27f62" (UID: "921a3461-a436-4802-b471-7f9081d27f62"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.619787 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/921a3461-a436-4802-b471-7f9081d27f62-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "921a3461-a436-4802-b471-7f9081d27f62" (UID: "921a3461-a436-4802-b471-7f9081d27f62"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.622703 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/921a3461-a436-4802-b471-7f9081d27f62-kube-api-access-ms66s" (OuterVolumeSpecName: "kube-api-access-ms66s") pod "921a3461-a436-4802-b471-7f9081d27f62" (UID: "921a3461-a436-4802-b471-7f9081d27f62"). InnerVolumeSpecName "kube-api-access-ms66s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.716610 4823 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/921a3461-a436-4802-b471-7f9081d27f62-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.716669 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.716688 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms66s\" (UniqueName: \"kubernetes.io/projected/921a3461-a436-4802-b471-7f9081d27f62-kube-api-access-ms66s\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.716706 4823 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/921a3461-a436-4802-b471-7f9081d27f62-client-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.765638 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9c8d4595b-mwrsz"] Jan 26 14:53:06 crc kubenswrapper[4823]: E0126 14:53:06.766012 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="921a3461-a436-4802-b471-7f9081d27f62" containerName="controller-manager" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.766039 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="921a3461-a436-4802-b471-7f9081d27f62" containerName="controller-manager" Jan 26 14:53:06 crc kubenswrapper[4823]: E0126 14:53:06.766059 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e52cb6-eea4-443b-aea9-ea6c0279ca10" containerName="route-controller-manager" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.766068 4823 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="85e52cb6-eea4-443b-aea9-ea6c0279ca10" containerName="route-controller-manager" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.766210 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e52cb6-eea4-443b-aea9-ea6c0279ca10" containerName="route-controller-manager" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.766233 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="921a3461-a436-4802-b471-7f9081d27f62" containerName="controller-manager" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.766872 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.778633 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c8d4595b-mwrsz"] Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.818420 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlfdn\" (UniqueName: \"kubernetes.io/projected/e759f682-8fa0-4299-bf8c-bdc87ac6a240-kube-api-access-vlfdn\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.818477 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e759f682-8fa0-4299-bf8c-bdc87ac6a240-serving-cert\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.818511 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-client-ca\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.818535 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-proxy-ca-bundles\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.818623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-config\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.920241 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlfdn\" (UniqueName: \"kubernetes.io/projected/e759f682-8fa0-4299-bf8c-bdc87ac6a240-kube-api-access-vlfdn\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.920778 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e759f682-8fa0-4299-bf8c-bdc87ac6a240-serving-cert\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc 
kubenswrapper[4823]: I0126 14:53:06.920904 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-client-ca\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.921003 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-proxy-ca-bundles\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.921108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-config\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.922658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-client-ca\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.923078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-config\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " 
pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.923455 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e759f682-8fa0-4299-bf8c-bdc87ac6a240-proxy-ca-bundles\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.925673 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e759f682-8fa0-4299-bf8c-bdc87ac6a240-serving-cert\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:06 crc kubenswrapper[4823]: I0126 14:53:06.942147 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlfdn\" (UniqueName: \"kubernetes.io/projected/e759f682-8fa0-4299-bf8c-bdc87ac6a240-kube-api-access-vlfdn\") pod \"controller-manager-9c8d4595b-mwrsz\" (UID: \"e759f682-8fa0-4299-bf8c-bdc87ac6a240\") " pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.110575 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.360160 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c8d4595b-mwrsz"] Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.546552 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" event={"ID":"921a3461-a436-4802-b471-7f9081d27f62","Type":"ContainerDied","Data":"0bb3a013e01df0dc430c57c5d88d88f2be9967f3582e0923cd348f6b1bc8c36b"} Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.546612 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67db6f585-k8fxp" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.546640 4823 scope.go:117] "RemoveContainer" containerID="2703e1a4915e1af28c8db5aa605058d160813cada8bad2bd6f60065469cd7dd2" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.549733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" event={"ID":"e759f682-8fa0-4299-bf8c-bdc87ac6a240","Type":"ContainerStarted","Data":"335fbc5975d26d69828fc6515cda8e42d53c0e882945157233946e780a8726f2"} Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.549787 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" event={"ID":"e759f682-8fa0-4299-bf8c-bdc87ac6a240","Type":"ContainerStarted","Data":"f2b7e7ecd64a82a105679cb53d88d655f8813f466c5d5dcb5645667ff8c4d057"} Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.550039 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.551204 4823 patch_prober.go:28] interesting 
pod/controller-manager-9c8d4595b-mwrsz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.551266 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podUID="e759f682-8fa0-4299-bf8c-bdc87ac6a240" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.569862 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e52cb6-eea4-443b-aea9-ea6c0279ca10" path="/var/lib/kubelet/pods/85e52cb6-eea4-443b-aea9-ea6c0279ca10/volumes" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.580349 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podStartSLOduration=2.580326859 podStartE2EDuration="2.580326859s" podCreationTimestamp="2026-01-26 14:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:53:07.576771907 +0000 UTC m=+384.262235002" watchObservedRunningTime="2026-01-26 14:53:07.580326859 +0000 UTC m=+384.265789964" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.595901 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-k8fxp"] Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.599923 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67db6f585-k8fxp"] Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.768750 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5"] Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.769724 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.772850 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.773004 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.773042 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.773186 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.773265 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.774604 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.788627 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5"] Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.836649 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-client-ca\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: 
\"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.837173 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-config\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.837203 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-serving-cert\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.837287 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdff\" (UniqueName: \"kubernetes.io/projected/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-kube-api-access-trdff\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.938250 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-config\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.938317 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-serving-cert\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.938390 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdff\" (UniqueName: \"kubernetes.io/projected/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-kube-api-access-trdff\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.938433 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-client-ca\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.939331 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-client-ca\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.941063 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-config\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: 
\"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.946070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-serving-cert\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:07 crc kubenswrapper[4823]: I0126 14:53:07.961482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdff\" (UniqueName: \"kubernetes.io/projected/20ec79d0-3e94-42a6-93ca-57b0c4f7416b-kube-api-access-trdff\") pod \"route-controller-manager-67bf9f4dd5-q94x5\" (UID: \"20ec79d0-3e94-42a6-93ca-57b0c4f7416b\") " pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:08 crc kubenswrapper[4823]: I0126 14:53:08.116546 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:08 crc kubenswrapper[4823]: I0126 14:53:08.353255 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5"] Jan 26 14:53:08 crc kubenswrapper[4823]: W0126 14:53:08.357946 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ec79d0_3e94_42a6_93ca_57b0c4f7416b.slice/crio-22955182e4d479d30936b2273989661191128a80f253576264ce242278df400a WatchSource:0}: Error finding container 22955182e4d479d30936b2273989661191128a80f253576264ce242278df400a: Status 404 returned error can't find the container with id 22955182e4d479d30936b2273989661191128a80f253576264ce242278df400a Jan 26 14:53:08 crc kubenswrapper[4823]: I0126 14:53:08.561184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" event={"ID":"20ec79d0-3e94-42a6-93ca-57b0c4f7416b","Type":"ContainerStarted","Data":"22955182e4d479d30936b2273989661191128a80f253576264ce242278df400a"} Jan 26 14:53:08 crc kubenswrapper[4823]: I0126 14:53:08.565590 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" Jan 26 14:53:09 crc kubenswrapper[4823]: I0126 14:53:09.569740 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="921a3461-a436-4802-b471-7f9081d27f62" path="/var/lib/kubelet/pods/921a3461-a436-4802-b471-7f9081d27f62/volumes" Jan 26 14:53:09 crc kubenswrapper[4823]: I0126 14:53:09.570802 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" 
event={"ID":"20ec79d0-3e94-42a6-93ca-57b0c4f7416b","Type":"ContainerStarted","Data":"1de907817d31aa0cee167fa47f11c89fc98eeb6a1f2189b4baadd9bc3f22ca33"} Jan 26 14:53:09 crc kubenswrapper[4823]: I0126 14:53:09.592590 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" podStartSLOduration=4.592569956 podStartE2EDuration="4.592569956s" podCreationTimestamp="2026-01-26 14:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:53:09.590832336 +0000 UTC m=+386.276295431" watchObservedRunningTime="2026-01-26 14:53:09.592569956 +0000 UTC m=+386.278033061" Jan 26 14:53:10 crc kubenswrapper[4823]: I0126 14:53:10.581665 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:10 crc kubenswrapper[4823]: I0126 14:53:10.589085 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67bf9f4dd5-q94x5" Jan 26 14:53:21 crc kubenswrapper[4823]: I0126 14:53:21.194605 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" Jan 26 14:53:21 crc kubenswrapper[4823]: I0126 14:53:21.266266 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbvlk"] Jan 26 14:53:34 crc kubenswrapper[4823]: I0126 14:53:34.508577 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:53:34 crc kubenswrapper[4823]: I0126 14:53:34.509106 4823 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:53:46 crc kubenswrapper[4823]: I0126 14:53:46.306633 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" podUID="aff73130-88e7-4a8b-9b78-9af559e12a71" containerName="registry" containerID="cri-o://960f7c08c0cf3de7396cba8b5ffb2dabf0cb595111009eb4aaa7836f3bf0ee8b" gracePeriod=30 Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.009114 4823 generic.go:334] "Generic (PLEG): container finished" podID="aff73130-88e7-4a8b-9b78-9af559e12a71" containerID="960f7c08c0cf3de7396cba8b5ffb2dabf0cb595111009eb4aaa7836f3bf0ee8b" exitCode=0 Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.009240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" event={"ID":"aff73130-88e7-4a8b-9b78-9af559e12a71","Type":"ContainerDied","Data":"960f7c08c0cf3de7396cba8b5ffb2dabf0cb595111009eb4aaa7836f3bf0ee8b"} Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.381681 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.494590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-certificates\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.494704 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-bound-sa-token\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.494787 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aff73130-88e7-4a8b-9b78-9af559e12a71-installation-pull-secrets\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.494856 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aff73130-88e7-4a8b-9b78-9af559e12a71-ca-trust-extracted\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.494916 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-tls\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.495142 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.495228 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-trusted-ca\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.495297 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-926mt\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-kube-api-access-926mt\") pod \"aff73130-88e7-4a8b-9b78-9af559e12a71\" (UID: \"aff73130-88e7-4a8b-9b78-9af559e12a71\") " Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.495978 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.496724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.496834 4823 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.502840 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.503236 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-kube-api-access-926mt" (OuterVolumeSpecName: "kube-api-access-926mt") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "kube-api-access-926mt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.503829 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff73130-88e7-4a8b-9b78-9af559e12a71-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.506824 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.519852 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.536559 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aff73130-88e7-4a8b-9b78-9af559e12a71-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "aff73130-88e7-4a8b-9b78-9af559e12a71" (UID: "aff73130-88e7-4a8b-9b78-9af559e12a71"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.598082 4823 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.598123 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-926mt\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-kube-api-access-926mt\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.598138 4823 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/aff73130-88e7-4a8b-9b78-9af559e12a71-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.598153 4823 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aff73130-88e7-4a8b-9b78-9af559e12a71-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.598168 4823 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/aff73130-88e7-4a8b-9b78-9af559e12a71-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:47 crc kubenswrapper[4823]: I0126 14:53:47.598180 4823 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/aff73130-88e7-4a8b-9b78-9af559e12a71-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 26 14:53:48 crc kubenswrapper[4823]: I0126 14:53:48.038173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" 
event={"ID":"aff73130-88e7-4a8b-9b78-9af559e12a71","Type":"ContainerDied","Data":"936b81c3ce49f8c7885f66cc56b14a19d49250ed689497a605634550655a5fe4"} Jan 26 14:53:48 crc kubenswrapper[4823]: I0126 14:53:48.038409 4823 scope.go:117] "RemoveContainer" containerID="960f7c08c0cf3de7396cba8b5ffb2dabf0cb595111009eb4aaa7836f3bf0ee8b" Jan 26 14:53:48 crc kubenswrapper[4823]: I0126 14:53:48.039025 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-pbvlk" Jan 26 14:53:48 crc kubenswrapper[4823]: I0126 14:53:48.065844 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbvlk"] Jan 26 14:53:48 crc kubenswrapper[4823]: I0126 14:53:48.069738 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-pbvlk"] Jan 26 14:53:49 crc kubenswrapper[4823]: I0126 14:53:49.568442 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff73130-88e7-4a8b-9b78-9af559e12a71" path="/var/lib/kubelet/pods/aff73130-88e7-4a8b-9b78-9af559e12a71/volumes" Jan 26 14:54:04 crc kubenswrapper[4823]: I0126 14:54:04.508423 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:54:04 crc kubenswrapper[4823]: I0126 14:54:04.509300 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:54:04 crc kubenswrapper[4823]: I0126 14:54:04.509416 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:54:04 crc kubenswrapper[4823]: I0126 14:54:04.510642 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1ac15f349ba1ced1b4d92c4c521df0c2d2acf5310bb08adcb7f5c409967c6a5"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:54:04 crc kubenswrapper[4823]: I0126 14:54:04.510775 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://c1ac15f349ba1ced1b4d92c4c521df0c2d2acf5310bb08adcb7f5c409967c6a5" gracePeriod=600 Jan 26 14:54:05 crc kubenswrapper[4823]: I0126 14:54:05.155064 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="c1ac15f349ba1ced1b4d92c4c521df0c2d2acf5310bb08adcb7f5c409967c6a5" exitCode=0 Jan 26 14:54:05 crc kubenswrapper[4823]: I0126 14:54:05.155110 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"c1ac15f349ba1ced1b4d92c4c521df0c2d2acf5310bb08adcb7f5c409967c6a5"} Jan 26 14:54:05 crc kubenswrapper[4823]: I0126 14:54:05.155155 4823 scope.go:117] "RemoveContainer" containerID="60f33cf1b9a54abbb41b455105d2780ceea2d67f225bfbdb0d78e5a874c7c04e" Jan 26 14:54:06 crc kubenswrapper[4823]: I0126 14:54:06.165491 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"6060074daac6a743fd06d9a3f457f73f68b68b6876078e35864c532ab12df1fb"} Jan 26 14:56:34 crc kubenswrapper[4823]: I0126 14:56:34.508835 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:56:34 crc kubenswrapper[4823]: I0126 14:56:34.509814 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:57:04 crc kubenswrapper[4823]: I0126 14:57:04.508627 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:57:04 crc kubenswrapper[4823]: I0126 14:57:04.509874 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:57:34 crc kubenswrapper[4823]: I0126 14:57:34.509209 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Jan 26 14:57:34 crc kubenswrapper[4823]: I0126 14:57:34.510302 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:57:34 crc kubenswrapper[4823]: I0126 14:57:34.510433 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 14:57:34 crc kubenswrapper[4823]: I0126 14:57:34.511728 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6060074daac6a743fd06d9a3f457f73f68b68b6876078e35864c532ab12df1fb"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 14:57:34 crc kubenswrapper[4823]: I0126 14:57:34.511842 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://6060074daac6a743fd06d9a3f457f73f68b68b6876078e35864c532ab12df1fb" gracePeriod=600 Jan 26 14:57:35 crc kubenswrapper[4823]: I0126 14:57:35.539598 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="6060074daac6a743fd06d9a3f457f73f68b68b6876078e35864c532ab12df1fb" exitCode=0 Jan 26 14:57:35 crc kubenswrapper[4823]: I0126 14:57:35.539702 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"6060074daac6a743fd06d9a3f457f73f68b68b6876078e35864c532ab12df1fb"} Jan 26 14:57:35 crc kubenswrapper[4823]: I0126 14:57:35.540688 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"ec7366454a163fe04376dec76b6dddb0ad3a342a392aad185b53b45a854cd90d"} Jan 26 14:57:35 crc kubenswrapper[4823]: I0126 14:57:35.540721 4823 scope.go:117] "RemoveContainer" containerID="c1ac15f349ba1ced1b4d92c4c521df0c2d2acf5310bb08adcb7f5c409967c6a5" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.824741 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-85nz2"] Jan 26 14:58:38 crc kubenswrapper[4823]: E0126 14:58:38.825492 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff73130-88e7-4a8b-9b78-9af559e12a71" containerName="registry" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.825508 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff73130-88e7-4a8b-9b78-9af559e12a71" containerName="registry" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.825653 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff73130-88e7-4a8b-9b78-9af559e12a71" containerName="registry" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.826145 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.828689 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-rqrwd" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.828887 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.829080 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.836762 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-75hjq"] Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.837659 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-75hjq" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.841628 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-sgtbq" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.865598 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-75hjq"] Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.866517 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-85nz2"] Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.897477 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqdgq\" (UniqueName: \"kubernetes.io/projected/2a5fd3e7-f2f4-484f-9d4b-24e596ed7502-kube-api-access-gqdgq\") pod \"cert-manager-858654f9db-75hjq\" (UID: \"2a5fd3e7-f2f4-484f-9d4b-24e596ed7502\") " pod="cert-manager/cert-manager-858654f9db-75hjq" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.901124 4823 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rdkmv"] Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.903453 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.907648 4823 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wcbqm" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.923823 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rdkmv"] Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.999278 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9stb\" (UniqueName: \"kubernetes.io/projected/ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b-kube-api-access-n9stb\") pod \"cert-manager-cainjector-cf98fcc89-85nz2\" (UID: \"ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" Jan 26 14:58:38 crc kubenswrapper[4823]: I0126 14:58:38.999494 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqdgq\" (UniqueName: \"kubernetes.io/projected/2a5fd3e7-f2f4-484f-9d4b-24e596ed7502-kube-api-access-gqdgq\") pod \"cert-manager-858654f9db-75hjq\" (UID: \"2a5fd3e7-f2f4-484f-9d4b-24e596ed7502\") " pod="cert-manager/cert-manager-858654f9db-75hjq" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.022991 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqdgq\" (UniqueName: \"kubernetes.io/projected/2a5fd3e7-f2f4-484f-9d4b-24e596ed7502-kube-api-access-gqdgq\") pod \"cert-manager-858654f9db-75hjq\" (UID: \"2a5fd3e7-f2f4-484f-9d4b-24e596ed7502\") " pod="cert-manager/cert-manager-858654f9db-75hjq" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.101558 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b4n6\" (UniqueName: \"kubernetes.io/projected/f48d6b9e-8425-4718-a920-2d2ca2bc5104-kube-api-access-2b4n6\") pod \"cert-manager-webhook-687f57d79b-rdkmv\" (UID: \"f48d6b9e-8425-4718-a920-2d2ca2bc5104\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.102130 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9stb\" (UniqueName: \"kubernetes.io/projected/ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b-kube-api-access-n9stb\") pod \"cert-manager-cainjector-cf98fcc89-85nz2\" (UID: \"ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.119324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9stb\" (UniqueName: \"kubernetes.io/projected/ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b-kube-api-access-n9stb\") pod \"cert-manager-cainjector-cf98fcc89-85nz2\" (UID: \"ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.149937 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.158717 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-75hjq" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.207329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b4n6\" (UniqueName: \"kubernetes.io/projected/f48d6b9e-8425-4718-a920-2d2ca2bc5104-kube-api-access-2b4n6\") pod \"cert-manager-webhook-687f57d79b-rdkmv\" (UID: \"f48d6b9e-8425-4718-a920-2d2ca2bc5104\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.228644 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b4n6\" (UniqueName: \"kubernetes.io/projected/f48d6b9e-8425-4718-a920-2d2ca2bc5104-kube-api-access-2b4n6\") pod \"cert-manager-webhook-687f57d79b-rdkmv\" (UID: \"f48d6b9e-8425-4718-a920-2d2ca2bc5104\") " pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.464289 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-85nz2"] Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.472926 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.528751 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.645141 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-75hjq"] Jan 26 14:58:39 crc kubenswrapper[4823]: W0126 14:58:39.655648 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a5fd3e7_f2f4_484f_9d4b_24e596ed7502.slice/crio-ad030c8fd525849f856dd4a6a7427373c428a6973bbdd2dee767b2f1e80d190b WatchSource:0}: Error finding container ad030c8fd525849f856dd4a6a7427373c428a6973bbdd2dee767b2f1e80d190b: Status 404 returned error can't find the container with id ad030c8fd525849f856dd4a6a7427373c428a6973bbdd2dee767b2f1e80d190b Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.759128 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-rdkmv"] Jan 26 14:58:39 crc kubenswrapper[4823]: W0126 14:58:39.765137 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf48d6b9e_8425_4718_a920_2d2ca2bc5104.slice/crio-6375307f39a247e2c03593026e122f85f285d097d3316b9e84757508b42ede1a WatchSource:0}: Error finding container 6375307f39a247e2c03593026e122f85f285d097d3316b9e84757508b42ede1a: Status 404 returned error can't find the container with id 6375307f39a247e2c03593026e122f85f285d097d3316b9e84757508b42ede1a Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.960439 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-75hjq" event={"ID":"2a5fd3e7-f2f4-484f-9d4b-24e596ed7502","Type":"ContainerStarted","Data":"ad030c8fd525849f856dd4a6a7427373c428a6973bbdd2dee767b2f1e80d190b"} Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.961743 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" event={"ID":"ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b","Type":"ContainerStarted","Data":"aa89a85d23229fe254373a48f7b05e88c1849b29863fde073e7156968624c4dc"} Jan 26 14:58:39 crc kubenswrapper[4823]: I0126 14:58:39.962721 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" event={"ID":"f48d6b9e-8425-4718-a920-2d2ca2bc5104","Type":"ContainerStarted","Data":"6375307f39a247e2c03593026e122f85f285d097d3316b9e84757508b42ede1a"} Jan 26 14:58:47 crc kubenswrapper[4823]: I0126 14:58:47.007454 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" event={"ID":"ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b","Type":"ContainerStarted","Data":"f9eeb5e36ac7b2fa5b2dcd231e90d8e4f4a6e1c1a3ae968c660eb8315eb4ddbe"} Jan 26 14:58:47 crc kubenswrapper[4823]: I0126 14:58:47.029585 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-85nz2" podStartSLOduration=2.629181297 podStartE2EDuration="9.029559226s" podCreationTimestamp="2026-01-26 14:58:38 +0000 UTC" firstStartedPulling="2026-01-26 14:58:39.47264542 +0000 UTC m=+716.158108535" lastFinishedPulling="2026-01-26 14:58:45.873023359 +0000 UTC m=+722.558486464" observedRunningTime="2026-01-26 14:58:47.022154908 +0000 UTC m=+723.707618013" watchObservedRunningTime="2026-01-26 14:58:47.029559226 +0000 UTC m=+723.715022341" Jan 26 14:58:48 crc kubenswrapper[4823]: I0126 14:58:48.016040 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" event={"ID":"f48d6b9e-8425-4718-a920-2d2ca2bc5104","Type":"ContainerStarted","Data":"d8c1466a3de694248fadd60808caf934f2b75db1676a92cb58559c84b23cb8a4"} Jan 26 14:58:48 crc kubenswrapper[4823]: I0126 14:58:48.016133 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:48 crc kubenswrapper[4823]: I0126 14:58:48.018967 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-75hjq" event={"ID":"2a5fd3e7-f2f4-484f-9d4b-24e596ed7502","Type":"ContainerStarted","Data":"362e6d79251857957597032b918d30a02c4c09ae6f9684221dc9989e5b029394"} Jan 26 14:58:48 crc kubenswrapper[4823]: I0126 14:58:48.040335 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" podStartSLOduration=2.175702169 podStartE2EDuration="10.040313123s" podCreationTimestamp="2026-01-26 14:58:38 +0000 UTC" firstStartedPulling="2026-01-26 14:58:39.767100765 +0000 UTC m=+716.452563870" lastFinishedPulling="2026-01-26 14:58:47.631711719 +0000 UTC m=+724.317174824" observedRunningTime="2026-01-26 14:58:48.03271768 +0000 UTC m=+724.718180795" watchObservedRunningTime="2026-01-26 14:58:48.040313123 +0000 UTC m=+724.725776228" Jan 26 14:58:48 crc kubenswrapper[4823]: I0126 14:58:48.061848 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-75hjq" podStartSLOduration=2.083859697 podStartE2EDuration="10.061813059s" podCreationTimestamp="2026-01-26 14:58:38 +0000 UTC" firstStartedPulling="2026-01-26 14:58:39.657833296 +0000 UTC m=+716.343296391" lastFinishedPulling="2026-01-26 14:58:47.635786648 +0000 UTC m=+724.321249753" observedRunningTime="2026-01-26 14:58:48.053742553 +0000 UTC m=+724.739205688" watchObservedRunningTime="2026-01-26 14:58:48.061813059 +0000 UTC m=+724.747276214" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.242172 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpz7g"] Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243057 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" 
podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-controller" containerID="cri-o://ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243092 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="northd" containerID="cri-o://a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243183 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="nbdb" containerID="cri-o://d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243222 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243274 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-node" containerID="cri-o://0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243306 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-acl-logging" 
containerID="cri-o://7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.243277 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="sbdb" containerID="cri-o://b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.286027 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" containerID="cri-o://49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" gracePeriod=30 Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.589426 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/3.log" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.592466 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovn-acl-logging/0.log" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.593238 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovn-controller/0.log" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.593728 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.658843 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-j28b7"] Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659128 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659149 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659165 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kubecfg-setup" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659173 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kubecfg-setup" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659186 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659194 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659202 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659210 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659222 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659230 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659239 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-node" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659246 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-node" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659261 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="northd" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659271 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="northd" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659282 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-acl-logging" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659290 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-acl-logging" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659298 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659305 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659315 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659322 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659334 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659342 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659351 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="nbdb" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659373 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="nbdb" Jan 26 14:58:51 crc kubenswrapper[4823]: E0126 14:58:51.659384 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="sbdb" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659391 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="sbdb" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659500 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-node" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659515 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="nbdb" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659523 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="northd" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659531 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659540 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-acl-logging" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659549 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659562 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="kube-rbac-proxy-ovn-metrics" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659570 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="sbdb" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659581 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659591 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovn-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659601 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.659834 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerName="ovnkube-controller" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.661847 4823 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697738 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-openvswitch\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-kubelet\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697860 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-ovn-kubernetes\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697894 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-systemd\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697990 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-var-lib-openvswitch\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698032 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-bin\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698069 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf2sr\" (UniqueName: \"kubernetes.io/projected/232a66a2-55bb-44f6-81a0-383432fbf1d5-kube-api-access-nf2sr\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697964 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698110 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698100 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698130 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.697878 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698095 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-etc-openvswitch\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698959 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-netns\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.698983 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-node-log\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699012 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovn-node-metrics-cert\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699043 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-ovn\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699093 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-slash\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-script-lib\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699133 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-config\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699150 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-systemd-units\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699170 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-netd\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699189 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-log-socket\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 
14:58:51.699216 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-env-overrides\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"232a66a2-55bb-44f6-81a0-383432fbf1d5\" (UID: \"232a66a2-55bb-44f6-81a0-383432fbf1d5\") " Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699489 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-run-ovn-kubernetes\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699519 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-cni-netd\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699563 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-kubelet\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699591 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-ovnkube-config\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699662 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/383b6645-4d25-4cf5-bb28-f5d707b58169-ovn-node-metrics-cert\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699686 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-env-overrides\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699718 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-cni-bin\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-log-socket\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 
14:58:51.699769 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65xw2\" (UniqueName: \"kubernetes.io/projected/383b6645-4d25-4cf5-bb28-f5d707b58169-kube-api-access-65xw2\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-var-lib-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699861 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699922 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-run-netns\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-ovnkube-script-lib\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699981 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-slash\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700024 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-node-log\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700053 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-ovn\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700115 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-systemd-units\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700155 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-etc-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-systemd\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.699082 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700169 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700212 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700281 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700233 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-slash" (OuterVolumeSpecName: "host-slash") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700471 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700616 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700653 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-log-socket" (OuterVolumeSpecName: "log-socket") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700243 4823 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700691 4823 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700705 4823 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700719 4823 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-openvswitch\") on node 
\"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700732 4823 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700744 4823 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700776 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700797 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-node-log" (OuterVolumeSpecName: "node-log") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.700830 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.703647 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/232a66a2-55bb-44f6-81a0-383432fbf1d5-kube-api-access-nf2sr" (OuterVolumeSpecName: "kube-api-access-nf2sr") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "kube-api-access-nf2sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.703739 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.711674 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "232a66a2-55bb-44f6-81a0-383432fbf1d5" (UID: "232a66a2-55bb-44f6-81a0-383432fbf1d5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.801949 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/383b6645-4d25-4cf5-bb28-f5d707b58169-ovn-node-metrics-cert\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.801996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-env-overrides\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802020 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-cni-bin\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-log-socket\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65xw2\" (UniqueName: \"kubernetes.io/projected/383b6645-4d25-4cf5-bb28-f5d707b58169-kube-api-access-65xw2\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 
14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-var-lib-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802098 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-ovnkube-script-lib\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802145 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-run-netns\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802169 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-slash\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc 
kubenswrapper[4823]: I0126 14:58:51.802196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-node-log\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802216 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-ovn\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802239 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802266 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-systemd-units\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802289 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-etc-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802306 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-systemd\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-run-ovn-kubernetes\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802341 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-cni-netd\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802384 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-kubelet\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-ovnkube-config\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802484 4823 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-slash\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802496 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802507 4823 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802516 4823 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802525 4823 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802534 4823 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-log-socket\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802544 4823 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802554 4823 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/232a66a2-55bb-44f6-81a0-383432fbf1d5-env-overrides\") 
on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802566 4823 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802575 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf2sr\" (UniqueName: \"kubernetes.io/projected/232a66a2-55bb-44f6-81a0-383432fbf1d5-kube-api-access-nf2sr\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802587 4823 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-node-log\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802595 4823 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802603 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/232a66a2-55bb-44f6-81a0-383432fbf1d5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.802615 4823 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/232a66a2-55bb-44f6-81a0-383432fbf1d5-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.803483 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-ovnkube-config\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804081 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-slash\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-run-ovn-kubernetes\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804089 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-ovn\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-systemd-units\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-etc-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804162 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804130 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-node-log\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-run-systemd\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804267 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-cni-netd\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804322 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-kubelet\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804326 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-var-lib-openvswitch\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-cni-bin\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804408 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-run-netns\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804473 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.804519 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/383b6645-4d25-4cf5-bb28-f5d707b58169-log-socket\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.805324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-ovnkube-script-lib\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.805335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/383b6645-4d25-4cf5-bb28-f5d707b58169-env-overrides\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.808472 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/383b6645-4d25-4cf5-bb28-f5d707b58169-ovn-node-metrics-cert\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.824458 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65xw2\" (UniqueName: \"kubernetes.io/projected/383b6645-4d25-4cf5-bb28-f5d707b58169-kube-api-access-65xw2\") pod \"ovnkube-node-j28b7\" (UID: \"383b6645-4d25-4cf5-bb28-f5d707b58169\") " pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:51 crc kubenswrapper[4823]: I0126 14:58:51.982439 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:52 crc kubenswrapper[4823]: W0126 14:58:52.009604 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod383b6645_4d25_4cf5_bb28_f5d707b58169.slice/crio-1d9081bebb89f70f700b4e57d21e03ddc2f953335f4d4c362258edd7484a1e5d WatchSource:0}: Error finding container 1d9081bebb89f70f700b4e57d21e03ddc2f953335f4d4c362258edd7484a1e5d: Status 404 returned error can't find the container with id 1d9081bebb89f70f700b4e57d21e03ddc2f953335f4d4c362258edd7484a1e5d Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.048146 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovnkube-controller/3.log" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.052553 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovn-acl-logging/0.log" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053124 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-kpz7g_232a66a2-55bb-44f6-81a0-383432fbf1d5/ovn-controller/0.log" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053652 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" exitCode=0 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053711 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" exitCode=0 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053724 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" 
containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" exitCode=0 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053737 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" exitCode=0 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053752 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" exitCode=0 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053768 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" exitCode=0 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053778 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" exitCode=143 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053790 4823 generic.go:334] "Generic (PLEG): container finished" podID="232a66a2-55bb-44f6-81a0-383432fbf1d5" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" exitCode=143 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053783 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053812 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053858 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.053916 4823 scope.go:117] "RemoveContainer" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054083 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054149 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054164 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054171 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054180 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054187 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054194 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054200 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054207 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054215 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054237 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054247 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054255 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054262 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054270 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054279 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054286 4823 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054294 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054300 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.054984 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055007 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055021 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055031 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055039 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} Jan 26 
14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055045 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055054 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055060 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055067 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055073 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055079 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055086 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055094 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kpz7g" 
event={"ID":"232a66a2-55bb-44f6-81a0-383432fbf1d5","Type":"ContainerDied","Data":"3b73cb140c99dc12c7aa1208e30aa51e378d517b478ebb3be9d4a2d3f7717c83"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055108 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055116 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055124 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055131 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055137 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055144 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055151 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055157 4823 pod_container_deletor.go:114] "Failed 
to issue the request to remove container" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055164 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.055171 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.058161 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/2.log" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.059029 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/1.log" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.059099 4823 generic.go:334] "Generic (PLEG): container finished" podID="6e7853ce-0557-452f-b7ae-cc549bf8e2ae" containerID="25e57a64a9bcd0d85710f61af7e99512530bf816f608ba70b91b03589278eb4f" exitCode=2 Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.059160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerDied","Data":"25e57a64a9bcd0d85710f61af7e99512530bf816f608ba70b91b03589278eb4f"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.059191 4823 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.059773 4823 scope.go:117] "RemoveContainer" 
containerID="25e57a64a9bcd0d85710f61af7e99512530bf816f608ba70b91b03589278eb4f" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.059947 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-p555f_openshift-multus(6e7853ce-0557-452f-b7ae-cc549bf8e2ae)\"" pod="openshift-multus/multus-p555f" podUID="6e7853ce-0557-452f-b7ae-cc549bf8e2ae" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.062310 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"1d9081bebb89f70f700b4e57d21e03ddc2f953335f4d4c362258edd7484a1e5d"} Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.083839 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.108410 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpz7g"] Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.115865 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kpz7g"] Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.135354 4823 scope.go:117] "RemoveContainer" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.153772 4823 scope.go:117] "RemoveContainer" containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.224631 4823 scope.go:117] "RemoveContainer" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.242931 4823 scope.go:117] "RemoveContainer" 
containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.259394 4823 scope.go:117] "RemoveContainer" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.277273 4823 scope.go:117] "RemoveContainer" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.296493 4823 scope.go:117] "RemoveContainer" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.312337 4823 scope.go:117] "RemoveContainer" containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.336158 4823 scope.go:117] "RemoveContainer" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.336894 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": container with ID starting with 49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08 not found: ID does not exist" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.336938 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} err="failed to get container status \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": rpc error: code = NotFound desc = could not find container \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": container with ID starting with 
49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.336971 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.337523 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": container with ID starting with f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd not found: ID does not exist" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.337576 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} err="failed to get container status \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": rpc error: code = NotFound desc = could not find container \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": container with ID starting with f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.337613 4823 scope.go:117] "RemoveContainer" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.338278 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": container with ID starting with b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6 not found: ID does not exist" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" Jan 26 14:58:52 crc 
kubenswrapper[4823]: I0126 14:58:52.338355 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} err="failed to get container status \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": rpc error: code = NotFound desc = could not find container \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": container with ID starting with b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.338417 4823 scope.go:117] "RemoveContainer" containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.339028 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": container with ID starting with d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab not found: ID does not exist" containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.339078 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} err="failed to get container status \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": rpc error: code = NotFound desc = could not find container \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": container with ID starting with d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.339106 4823 scope.go:117] "RemoveContainer" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" Jan 26 
14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.341544 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": container with ID starting with a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b not found: ID does not exist" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.341580 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} err="failed to get container status \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": rpc error: code = NotFound desc = could not find container \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": container with ID starting with a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.341596 4823 scope.go:117] "RemoveContainer" containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.342082 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": container with ID starting with 63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30 not found: ID does not exist" containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.342106 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} err="failed to get container status 
\"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": rpc error: code = NotFound desc = could not find container \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": container with ID starting with 63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.342118 4823 scope.go:117] "RemoveContainer" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.343301 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": container with ID starting with 0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f not found: ID does not exist" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.343325 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} err="failed to get container status \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": rpc error: code = NotFound desc = could not find container \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": container with ID starting with 0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.343343 4823 scope.go:117] "RemoveContainer" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.343897 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": container with ID starting with 7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2 not found: ID does not exist" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.343940 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} err="failed to get container status \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": rpc error: code = NotFound desc = could not find container \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": container with ID starting with 7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.343968 4823 scope.go:117] "RemoveContainer" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.344320 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": container with ID starting with ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9 not found: ID does not exist" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.344350 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} err="failed to get container status \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": rpc error: code = NotFound desc = could not find container \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": container with ID 
starting with ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.344399 4823 scope.go:117] "RemoveContainer" containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" Jan 26 14:58:52 crc kubenswrapper[4823]: E0126 14:58:52.344918 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": container with ID starting with ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332 not found: ID does not exist" containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.344963 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} err="failed to get container status \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": rpc error: code = NotFound desc = could not find container \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": container with ID starting with ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.344990 4823 scope.go:117] "RemoveContainer" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.345334 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} err="failed to get container status \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": rpc error: code = NotFound desc = could not find container \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": 
container with ID starting with 49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.345379 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.345723 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} err="failed to get container status \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": rpc error: code = NotFound desc = could not find container \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": container with ID starting with f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.345756 4823 scope.go:117] "RemoveContainer" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.346660 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} err="failed to get container status \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": rpc error: code = NotFound desc = could not find container \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": container with ID starting with b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.346687 4823 scope.go:117] "RemoveContainer" containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.347097 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} err="failed to get container status \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": rpc error: code = NotFound desc = could not find container \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": container with ID starting with d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.347183 4823 scope.go:117] "RemoveContainer" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.347700 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} err="failed to get container status \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": rpc error: code = NotFound desc = could not find container \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": container with ID starting with a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.347739 4823 scope.go:117] "RemoveContainer" containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.348130 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} err="failed to get container status \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": rpc error: code = NotFound desc = could not find container \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": container with ID starting with 63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30 not found: ID does not 
exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.348165 4823 scope.go:117] "RemoveContainer" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.348788 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} err="failed to get container status \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": rpc error: code = NotFound desc = could not find container \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": container with ID starting with 0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.348842 4823 scope.go:117] "RemoveContainer" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.349501 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} err="failed to get container status \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": rpc error: code = NotFound desc = could not find container \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": container with ID starting with 7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.349573 4823 scope.go:117] "RemoveContainer" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.350679 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} err="failed to get container status 
\"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": rpc error: code = NotFound desc = could not find container \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": container with ID starting with ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.350762 4823 scope.go:117] "RemoveContainer" containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.351275 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} err="failed to get container status \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": rpc error: code = NotFound desc = could not find container \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": container with ID starting with ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.351300 4823 scope.go:117] "RemoveContainer" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.351938 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} err="failed to get container status \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": rpc error: code = NotFound desc = could not find container \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": container with ID starting with 49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.351971 4823 scope.go:117] "RemoveContainer" 
containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.352509 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} err="failed to get container status \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": rpc error: code = NotFound desc = could not find container \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": container with ID starting with f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.352542 4823 scope.go:117] "RemoveContainer" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.352964 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} err="failed to get container status \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": rpc error: code = NotFound desc = could not find container \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": container with ID starting with b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.352987 4823 scope.go:117] "RemoveContainer" containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.353301 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} err="failed to get container status \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": rpc error: code = NotFound desc = could 
not find container \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": container with ID starting with d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.353323 4823 scope.go:117] "RemoveContainer" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.353769 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} err="failed to get container status \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": rpc error: code = NotFound desc = could not find container \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": container with ID starting with a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.353798 4823 scope.go:117] "RemoveContainer" containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.354098 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} err="failed to get container status \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": rpc error: code = NotFound desc = could not find container \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": container with ID starting with 63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.354123 4823 scope.go:117] "RemoveContainer" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 
14:58:52.355039 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} err="failed to get container status \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": rpc error: code = NotFound desc = could not find container \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": container with ID starting with 0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.355102 4823 scope.go:117] "RemoveContainer" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.355560 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} err="failed to get container status \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": rpc error: code = NotFound desc = could not find container \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": container with ID starting with 7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.355591 4823 scope.go:117] "RemoveContainer" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.355923 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} err="failed to get container status \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": rpc error: code = NotFound desc = could not find container \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": container with ID starting with 
ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.355958 4823 scope.go:117] "RemoveContainer" containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.356302 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} err="failed to get container status \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": rpc error: code = NotFound desc = could not find container \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": container with ID starting with ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.356331 4823 scope.go:117] "RemoveContainer" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.356800 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} err="failed to get container status \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": rpc error: code = NotFound desc = could not find container \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": container with ID starting with 49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.356826 4823 scope.go:117] "RemoveContainer" containerID="f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.357227 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd"} err="failed to get container status \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": rpc error: code = NotFound desc = could not find container \"f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd\": container with ID starting with f55b6c4d988bf17851c75c93b976c93bb592cf84dc27897fb519db2c828d65dd not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.357300 4823 scope.go:117] "RemoveContainer" containerID="b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.357715 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6"} err="failed to get container status \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": rpc error: code = NotFound desc = could not find container \"b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6\": container with ID starting with b52e26194ef22c4b5d05b92ca7d6b13aa35dd4c7345d406ff3c2a3f9d0a983b6 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.357792 4823 scope.go:117] "RemoveContainer" containerID="d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.358123 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab"} err="failed to get container status \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": rpc error: code = NotFound desc = could not find container \"d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab\": container with ID starting with d95d73a4b76421527b844c515e45e23ea7a80560c3c19aa19d62763f9f2bb1ab not found: ID does not 
exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.358152 4823 scope.go:117] "RemoveContainer" containerID="a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.358556 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b"} err="failed to get container status \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": rpc error: code = NotFound desc = could not find container \"a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b\": container with ID starting with a85a2f568223592b3c207a26f76da3ffef7a7e474f30928e06a12d76a185fb2b not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.358603 4823 scope.go:117] "RemoveContainer" containerID="63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.358969 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30"} err="failed to get container status \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": rpc error: code = NotFound desc = could not find container \"63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30\": container with ID starting with 63f4d99cb34800d1363b90b9f349c053485ea59fcdb4d004b5fc3d5b79b5bd30 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.358998 4823 scope.go:117] "RemoveContainer" containerID="0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.359346 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f"} err="failed to get container status 
\"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": rpc error: code = NotFound desc = could not find container \"0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f\": container with ID starting with 0c690ec78824fdfeec27bf615c46b45ea85e21d775199dc7caff2b3fdd4a343f not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.359398 4823 scope.go:117] "RemoveContainer" containerID="7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.359736 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2"} err="failed to get container status \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": rpc error: code = NotFound desc = could not find container \"7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2\": container with ID starting with 7fcb92a06cca7cfb531f839551b7f13bb33b5aa39d554170df172cb775d396c2 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.359792 4823 scope.go:117] "RemoveContainer" containerID="ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.360308 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9"} err="failed to get container status \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": rpc error: code = NotFound desc = could not find container \"ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9\": container with ID starting with ca4ccc80cad1176d8d6c92161725f8e230e2c05540c685b4d8eef24f785b1cb9 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.360335 4823 scope.go:117] "RemoveContainer" 
containerID="ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.360770 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332"} err="failed to get container status \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": rpc error: code = NotFound desc = could not find container \"ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332\": container with ID starting with ecb4497920c37a1ef88062b2cd7bdf0238f5418d5d7a317ef67a1802767c7332 not found: ID does not exist" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.360822 4823 scope.go:117] "RemoveContainer" containerID="49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08" Jan 26 14:58:52 crc kubenswrapper[4823]: I0126 14:58:52.361180 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08"} err="failed to get container status \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": rpc error: code = NotFound desc = could not find container \"49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08\": container with ID starting with 49961f1600d2dd8860faf48bb45fcb37aba2b6ee02ea72c387b3194013113b08 not found: ID does not exist" Jan 26 14:58:53 crc kubenswrapper[4823]: I0126 14:58:53.069835 4823 generic.go:334] "Generic (PLEG): container finished" podID="383b6645-4d25-4cf5-bb28-f5d707b58169" containerID="765e44c14b852094080820273a7da2ed0734eac590020415dd625029cc9f9578" exitCode=0 Jan 26 14:58:53 crc kubenswrapper[4823]: I0126 14:58:53.069921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" 
event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerDied","Data":"765e44c14b852094080820273a7da2ed0734eac590020415dd625029cc9f9578"} Jan 26 14:58:53 crc kubenswrapper[4823]: I0126 14:58:53.568381 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="232a66a2-55bb-44f6-81a0-383432fbf1d5" path="/var/lib/kubelet/pods/232a66a2-55bb-44f6-81a0-383432fbf1d5/volumes" Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.081519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"8b2ba6edaaf257a64351d395effab341f1f1e78ff12b80031629da3621a2d8d0"} Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.081878 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"7f654101b82afa91615567ad006321de3cb6575d248b23fadc25c91948840941"} Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.081898 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"3aee771125b5f3ac724b7d22dd699a21effb6b4361d22ab2d25e250068e7055a"} Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.081913 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"6a8ed6a4e68621d0a5f67952c0dbc59aca67d8bffdbb6dcc931c67270411c525"} Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.081924 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"5efba961d63f79970fd803d2d37c9654a14c7aeee6c336df7dcce53d57ce0361"} 
Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.081935 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"e860780d698beab805a826d3f6d271858fec1f7f23cfa25c80bf6a31da664474"} Jan 26 14:58:54 crc kubenswrapper[4823]: I0126 14:58:54.532339 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-rdkmv" Jan 26 14:58:57 crc kubenswrapper[4823]: I0126 14:58:57.103530 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"a06db028e0e1ca03f5d107c69e7f536f92daa15f86da36064f68f86e1bbad116"} Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.120859 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" event={"ID":"383b6645-4d25-4cf5-bb28-f5d707b58169","Type":"ContainerStarted","Data":"2fc07b1e30787bbe16e636cc24d4eb9945894fed601d52d87e82088e66411da0"} Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.121317 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.121336 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.121348 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.151786 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.156304 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:58:59 crc kubenswrapper[4823]: I0126 14:58:59.157033 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" podStartSLOduration=8.157018523 podStartE2EDuration="8.157018523s" podCreationTimestamp="2026-01-26 14:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 14:58:59.15316586 +0000 UTC m=+735.838628965" watchObservedRunningTime="2026-01-26 14:58:59.157018523 +0000 UTC m=+735.842481628" Jan 26 14:59:03 crc kubenswrapper[4823]: I0126 14:59:03.563991 4823 scope.go:117] "RemoveContainer" containerID="25e57a64a9bcd0d85710f61af7e99512530bf816f608ba70b91b03589278eb4f" Jan 26 14:59:07 crc kubenswrapper[4823]: I0126 14:59:07.170189 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/2.log" Jan 26 14:59:07 crc kubenswrapper[4823]: I0126 14:59:07.170950 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/1.log" Jan 26 14:59:07 crc kubenswrapper[4823]: I0126 14:59:07.170989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p555f" event={"ID":"6e7853ce-0557-452f-b7ae-cc549bf8e2ae","Type":"ContainerStarted","Data":"e29aeb40d8d94d0fe42076542d2918f9d860f93441d61ae4d7935981f7cacaa5"} Jan 26 14:59:22 crc kubenswrapper[4823]: I0126 14:59:22.012425 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-j28b7" Jan 26 14:59:32 crc kubenswrapper[4823]: I0126 14:59:32.832663 4823 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 26 14:59:34 crc kubenswrapper[4823]: I0126 
14:59:34.508885 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 14:59:34 crc kubenswrapper[4823]: I0126 14:59:34.509579 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 14:59:43 crc kubenswrapper[4823]: I0126 14:59:43.876691 4823 scope.go:117] "RemoveContainer" containerID="3d9997e1c384fff7560bd4f45dcbc44a289ddc562c7c9784cda8b253e6d0d060" Jan 26 14:59:43 crc kubenswrapper[4823]: I0126 14:59:43.990176 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p555f_6e7853ce-0557-452f-b7ae-cc549bf8e2ae/kube-multus/2.log" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.088924 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm"] Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.090520 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.094292 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.104218 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm"] Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.262091 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.262174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.262239 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7hhh\" (UniqueName: \"kubernetes.io/projected/4e623b4c-fb12-4aa5-a519-0ec22f564425-kube-api-access-v7hhh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: 
I0126 14:59:47.363218 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7hhh\" (UniqueName: \"kubernetes.io/projected/4e623b4c-fb12-4aa5-a519-0ec22f564425-kube-api-access-v7hhh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.363346 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.363415 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.364018 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.364195 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.387896 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7hhh\" (UniqueName: \"kubernetes.io/projected/4e623b4c-fb12-4aa5-a519-0ec22f564425-kube-api-access-v7hhh\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.408279 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:47 crc kubenswrapper[4823]: I0126 14:59:47.630856 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm"] Jan 26 14:59:47 crc kubenswrapper[4823]: W0126 14:59:47.638525 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e623b4c_fb12_4aa5_a519_0ec22f564425.slice/crio-6ab3d144b558e6e0bc662e8208f8e16314565a211fc6352083c43a8eae2d51be WatchSource:0}: Error finding container 6ab3d144b558e6e0bc662e8208f8e16314565a211fc6352083c43a8eae2d51be: Status 404 returned error can't find the container with id 6ab3d144b558e6e0bc662e8208f8e16314565a211fc6352083c43a8eae2d51be Jan 26 14:59:48 crc kubenswrapper[4823]: I0126 14:59:48.023545 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" 
event={"ID":"4e623b4c-fb12-4aa5-a519-0ec22f564425","Type":"ContainerStarted","Data":"6ab3d144b558e6e0bc662e8208f8e16314565a211fc6352083c43a8eae2d51be"} Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.032255 4823 generic.go:334] "Generic (PLEG): container finished" podID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerID="e8f84b2f02b23b4442b468e03d514e294ad3b9138e09ef2ae4582abe22c7b46d" exitCode=0 Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.032329 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" event={"ID":"4e623b4c-fb12-4aa5-a519-0ec22f564425","Type":"ContainerDied","Data":"e8f84b2f02b23b4442b468e03d514e294ad3b9138e09ef2ae4582abe22c7b46d"} Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.110492 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cnp24"] Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.115198 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.120407 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnp24"] Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.291202 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-utilities\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.291262 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxwh\" (UniqueName: \"kubernetes.io/projected/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-kube-api-access-prxwh\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.291342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-catalog-content\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.393378 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-catalog-content\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.393558 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-prxwh\" (UniqueName: \"kubernetes.io/projected/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-kube-api-access-prxwh\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.393586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-utilities\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.394062 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-catalog-content\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.394085 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-utilities\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.417830 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxwh\" (UniqueName: \"kubernetes.io/projected/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-kube-api-access-prxwh\") pod \"redhat-operators-cnp24\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.441380 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:49 crc kubenswrapper[4823]: I0126 14:59:49.717899 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cnp24"] Jan 26 14:59:50 crc kubenswrapper[4823]: I0126 14:59:50.042637 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerID="eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d" exitCode=0 Jan 26 14:59:50 crc kubenswrapper[4823]: I0126 14:59:50.042711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerDied","Data":"eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d"} Jan 26 14:59:50 crc kubenswrapper[4823]: I0126 14:59:50.042770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerStarted","Data":"52ec45c33bf608033ae068f7ccd0a3f8ce1e302e9ad265e2c6cce52d0323ba1b"} Jan 26 14:59:51 crc kubenswrapper[4823]: I0126 14:59:51.052969 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerStarted","Data":"8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a"} Jan 26 14:59:51 crc kubenswrapper[4823]: I0126 14:59:51.057000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" event={"ID":"4e623b4c-fb12-4aa5-a519-0ec22f564425","Type":"ContainerStarted","Data":"47fc3638dd16dad4ece49d8e7caec441c4f9a260c26e979bc1edc4f732610f9a"} Jan 26 14:59:52 crc kubenswrapper[4823]: I0126 14:59:52.066798 4823 generic.go:334] "Generic (PLEG): container finished" podID="4e623b4c-fb12-4aa5-a519-0ec22f564425" 
containerID="47fc3638dd16dad4ece49d8e7caec441c4f9a260c26e979bc1edc4f732610f9a" exitCode=0 Jan 26 14:59:52 crc kubenswrapper[4823]: I0126 14:59:52.066878 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" event={"ID":"4e623b4c-fb12-4aa5-a519-0ec22f564425","Type":"ContainerDied","Data":"47fc3638dd16dad4ece49d8e7caec441c4f9a260c26e979bc1edc4f732610f9a"} Jan 26 14:59:53 crc kubenswrapper[4823]: I0126 14:59:53.074854 4823 generic.go:334] "Generic (PLEG): container finished" podID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerID="eb6f5876b40cbd99809e8fafaff375d80c98e1b2824211a5c0c65ff4e6252f74" exitCode=0 Jan 26 14:59:53 crc kubenswrapper[4823]: I0126 14:59:53.074997 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" event={"ID":"4e623b4c-fb12-4aa5-a519-0ec22f564425","Type":"ContainerDied","Data":"eb6f5876b40cbd99809e8fafaff375d80c98e1b2824211a5c0c65ff4e6252f74"} Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.399263 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.512964 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7hhh\" (UniqueName: \"kubernetes.io/projected/4e623b4c-fb12-4aa5-a519-0ec22f564425-kube-api-access-v7hhh\") pod \"4e623b4c-fb12-4aa5-a519-0ec22f564425\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.513090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-util\") pod \"4e623b4c-fb12-4aa5-a519-0ec22f564425\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.513173 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-bundle\") pod \"4e623b4c-fb12-4aa5-a519-0ec22f564425\" (UID: \"4e623b4c-fb12-4aa5-a519-0ec22f564425\") " Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.514186 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-bundle" (OuterVolumeSpecName: "bundle") pod "4e623b4c-fb12-4aa5-a519-0ec22f564425" (UID: "4e623b4c-fb12-4aa5-a519-0ec22f564425"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.523819 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e623b4c-fb12-4aa5-a519-0ec22f564425-kube-api-access-v7hhh" (OuterVolumeSpecName: "kube-api-access-v7hhh") pod "4e623b4c-fb12-4aa5-a519-0ec22f564425" (UID: "4e623b4c-fb12-4aa5-a519-0ec22f564425"). InnerVolumeSpecName "kube-api-access-v7hhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.528294 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-util" (OuterVolumeSpecName: "util") pod "4e623b4c-fb12-4aa5-a519-0ec22f564425" (UID: "4e623b4c-fb12-4aa5-a519-0ec22f564425"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.614934 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7hhh\" (UniqueName: \"kubernetes.io/projected/4e623b4c-fb12-4aa5-a519-0ec22f564425-kube-api-access-v7hhh\") on node \"crc\" DevicePath \"\"" Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.614999 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-util\") on node \"crc\" DevicePath \"\"" Jan 26 14:59:54 crc kubenswrapper[4823]: I0126 14:59:54.615016 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4e623b4c-fb12-4aa5-a519-0ec22f564425-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 14:59:55 crc kubenswrapper[4823]: I0126 14:59:55.092707 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" Jan 26 14:59:55 crc kubenswrapper[4823]: I0126 14:59:55.097485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm" event={"ID":"4e623b4c-fb12-4aa5-a519-0ec22f564425","Type":"ContainerDied","Data":"6ab3d144b558e6e0bc662e8208f8e16314565a211fc6352083c43a8eae2d51be"} Jan 26 14:59:55 crc kubenswrapper[4823]: I0126 14:59:55.097538 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ab3d144b558e6e0bc662e8208f8e16314565a211fc6352083c43a8eae2d51be" Jan 26 14:59:55 crc kubenswrapper[4823]: I0126 14:59:55.099641 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerID="8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a" exitCode=0 Jan 26 14:59:55 crc kubenswrapper[4823]: I0126 14:59:55.099684 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerDied","Data":"8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a"} Jan 26 14:59:56 crc kubenswrapper[4823]: I0126 14:59:56.109180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerStarted","Data":"4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e"} Jan 26 14:59:56 crc kubenswrapper[4823]: I0126 14:59:56.140154 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cnp24" podStartSLOduration=1.6193638529999999 podStartE2EDuration="7.140109153s" podCreationTimestamp="2026-01-26 14:59:49 +0000 UTC" firstStartedPulling="2026-01-26 14:59:50.044715361 +0000 UTC m=+786.730178466" 
lastFinishedPulling="2026-01-26 14:59:55.565460641 +0000 UTC m=+792.250923766" observedRunningTime="2026-01-26 14:59:56.133656037 +0000 UTC m=+792.819119192" watchObservedRunningTime="2026-01-26 14:59:56.140109153 +0000 UTC m=+792.825572318" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.643747 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tbvxm"] Jan 26 14:59:57 crc kubenswrapper[4823]: E0126 14:59:57.644488 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="extract" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.644505 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="extract" Jan 26 14:59:57 crc kubenswrapper[4823]: E0126 14:59:57.644528 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="util" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.644535 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="util" Jan 26 14:59:57 crc kubenswrapper[4823]: E0126 14:59:57.644547 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="pull" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.644554 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="pull" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.644660 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e623b4c-fb12-4aa5-a519-0ec22f564425" containerName="extract" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.645147 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.647751 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tl28c" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.647987 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.648853 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.666615 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tbvxm"] Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.763916 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkwf\" (UniqueName: \"kubernetes.io/projected/2938d719-ff17-4def-83c8-3c6b49cd6627-kube-api-access-xpkwf\") pod \"nmstate-operator-646758c888-tbvxm\" (UID: \"2938d719-ff17-4def-83c8-3c6b49cd6627\") " pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.865851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpkwf\" (UniqueName: \"kubernetes.io/projected/2938d719-ff17-4def-83c8-3c6b49cd6627-kube-api-access-xpkwf\") pod \"nmstate-operator-646758c888-tbvxm\" (UID: \"2938d719-ff17-4def-83c8-3c6b49cd6627\") " pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.890563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpkwf\" (UniqueName: \"kubernetes.io/projected/2938d719-ff17-4def-83c8-3c6b49cd6627-kube-api-access-xpkwf\") pod \"nmstate-operator-646758c888-tbvxm\" (UID: 
\"2938d719-ff17-4def-83c8-3c6b49cd6627\") " pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" Jan 26 14:59:57 crc kubenswrapper[4823]: I0126 14:59:57.961117 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" Jan 26 14:59:58 crc kubenswrapper[4823]: I0126 14:59:58.246995 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-tbvxm"] Jan 26 14:59:58 crc kubenswrapper[4823]: W0126 14:59:58.256630 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2938d719_ff17_4def_83c8_3c6b49cd6627.slice/crio-0df2ae6654db601b0e585f2504e9228a01514df8c8349b0d54dd2dfd20141cfc WatchSource:0}: Error finding container 0df2ae6654db601b0e585f2504e9228a01514df8c8349b0d54dd2dfd20141cfc: Status 404 returned error can't find the container with id 0df2ae6654db601b0e585f2504e9228a01514df8c8349b0d54dd2dfd20141cfc Jan 26 14:59:59 crc kubenswrapper[4823]: I0126 14:59:59.132242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" event={"ID":"2938d719-ff17-4def-83c8-3c6b49cd6627","Type":"ContainerStarted","Data":"0df2ae6654db601b0e585f2504e9228a01514df8c8349b0d54dd2dfd20141cfc"} Jan 26 14:59:59 crc kubenswrapper[4823]: I0126 14:59:59.442233 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 14:59:59 crc kubenswrapper[4823]: I0126 14:59:59.442333 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.174410 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l"] Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.175968 4823 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.179747 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.180077 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.187329 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l"] Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.303030 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95fb51d-24d9-42e0-a51c-18314aadfb14-config-volume\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.303106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnl8x\" (UniqueName: \"kubernetes.io/projected/a95fb51d-24d9-42e0-a51c-18314aadfb14-kube-api-access-cnl8x\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.303255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a95fb51d-24d9-42e0-a51c-18314aadfb14-secret-volume\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.405982 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95fb51d-24d9-42e0-a51c-18314aadfb14-config-volume\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.406129 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnl8x\" (UniqueName: \"kubernetes.io/projected/a95fb51d-24d9-42e0-a51c-18314aadfb14-kube-api-access-cnl8x\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.406334 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a95fb51d-24d9-42e0-a51c-18314aadfb14-secret-volume\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.407132 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95fb51d-24d9-42e0-a51c-18314aadfb14-config-volume\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.415511 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a95fb51d-24d9-42e0-a51c-18314aadfb14-secret-volume\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.424636 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnl8x\" (UniqueName: \"kubernetes.io/projected/a95fb51d-24d9-42e0-a51c-18314aadfb14-kube-api-access-cnl8x\") pod \"collect-profiles-29490660-k4l9l\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.481898 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cnp24" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="registry-server" probeResult="failure" output=< Jan 26 15:00:00 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 15:00:00 crc kubenswrapper[4823]: > Jan 26 15:00:00 crc kubenswrapper[4823]: I0126 15:00:00.512819 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:01 crc kubenswrapper[4823]: I0126 15:00:01.188599 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l"] Jan 26 15:00:02 crc kubenswrapper[4823]: I0126 15:00:02.155396 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" event={"ID":"2938d719-ff17-4def-83c8-3c6b49cd6627","Type":"ContainerStarted","Data":"f23f114277cfec0450008e6f1b54ff47191686a59082895162dd82a2ab96a2de"} Jan 26 15:00:02 crc kubenswrapper[4823]: I0126 15:00:02.157479 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" event={"ID":"a95fb51d-24d9-42e0-a51c-18314aadfb14","Type":"ContainerStarted","Data":"687b926217a3cc4f22b6e8c6a17a347b77024d492b783d3a7473253a968a51ab"} Jan 26 15:00:02 crc kubenswrapper[4823]: I0126 15:00:02.157506 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" event={"ID":"a95fb51d-24d9-42e0-a51c-18314aadfb14","Type":"ContainerStarted","Data":"b5f57cfd432e50fe1f415d51da32022fc3a62a67f677b861f67ee575a33a3091"} Jan 26 15:00:02 crc kubenswrapper[4823]: I0126 15:00:02.191669 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-tbvxm" podStartSLOduration=1.490718161 podStartE2EDuration="5.191639647s" podCreationTimestamp="2026-01-26 14:59:57 +0000 UTC" firstStartedPulling="2026-01-26 14:59:58.259919025 +0000 UTC m=+794.945382120" lastFinishedPulling="2026-01-26 15:00:01.960840501 +0000 UTC m=+798.646303606" observedRunningTime="2026-01-26 15:00:02.177798129 +0000 UTC m=+798.863261234" watchObservedRunningTime="2026-01-26 15:00:02.191639647 +0000 UTC m=+798.877102752" Jan 26 15:00:02 crc kubenswrapper[4823]: I0126 
15:00:02.215948 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" podStartSLOduration=2.215914251 podStartE2EDuration="2.215914251s" podCreationTimestamp="2026-01-26 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:00:02.215280804 +0000 UTC m=+798.900743909" watchObservedRunningTime="2026-01-26 15:00:02.215914251 +0000 UTC m=+798.901377356" Jan 26 15:00:04 crc kubenswrapper[4823]: I0126 15:00:04.169247 4823 generic.go:334] "Generic (PLEG): container finished" podID="a95fb51d-24d9-42e0-a51c-18314aadfb14" containerID="687b926217a3cc4f22b6e8c6a17a347b77024d492b783d3a7473253a968a51ab" exitCode=0 Jan 26 15:00:04 crc kubenswrapper[4823]: I0126 15:00:04.169347 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" event={"ID":"a95fb51d-24d9-42e0-a51c-18314aadfb14","Type":"ContainerDied","Data":"687b926217a3cc4f22b6e8c6a17a347b77024d492b783d3a7473253a968a51ab"} Jan 26 15:00:04 crc kubenswrapper[4823]: I0126 15:00:04.508823 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:00:04 crc kubenswrapper[4823]: I0126 15:00:04.508944 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.442313 4823 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.540266 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnl8x\" (UniqueName: \"kubernetes.io/projected/a95fb51d-24d9-42e0-a51c-18314aadfb14-kube-api-access-cnl8x\") pod \"a95fb51d-24d9-42e0-a51c-18314aadfb14\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.540397 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a95fb51d-24d9-42e0-a51c-18314aadfb14-secret-volume\") pod \"a95fb51d-24d9-42e0-a51c-18314aadfb14\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.540450 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95fb51d-24d9-42e0-a51c-18314aadfb14-config-volume\") pod \"a95fb51d-24d9-42e0-a51c-18314aadfb14\" (UID: \"a95fb51d-24d9-42e0-a51c-18314aadfb14\") " Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.541834 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a95fb51d-24d9-42e0-a51c-18314aadfb14-config-volume" (OuterVolumeSpecName: "config-volume") pod "a95fb51d-24d9-42e0-a51c-18314aadfb14" (UID: "a95fb51d-24d9-42e0-a51c-18314aadfb14"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.548208 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a95fb51d-24d9-42e0-a51c-18314aadfb14-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a95fb51d-24d9-42e0-a51c-18314aadfb14" (UID: "a95fb51d-24d9-42e0-a51c-18314aadfb14"). 
InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.548259 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a95fb51d-24d9-42e0-a51c-18314aadfb14-kube-api-access-cnl8x" (OuterVolumeSpecName: "kube-api-access-cnl8x") pod "a95fb51d-24d9-42e0-a51c-18314aadfb14" (UID: "a95fb51d-24d9-42e0-a51c-18314aadfb14"). InnerVolumeSpecName "kube-api-access-cnl8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.642635 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a95fb51d-24d9-42e0-a51c-18314aadfb14-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.642713 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a95fb51d-24d9-42e0-a51c-18314aadfb14-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:05 crc kubenswrapper[4823]: I0126 15:00:05.642726 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnl8x\" (UniqueName: \"kubernetes.io/projected/a95fb51d-24d9-42e0-a51c-18314aadfb14-kube-api-access-cnl8x\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:06 crc kubenswrapper[4823]: I0126 15:00:06.184209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" event={"ID":"a95fb51d-24d9-42e0-a51c-18314aadfb14","Type":"ContainerDied","Data":"b5f57cfd432e50fe1f415d51da32022fc3a62a67f677b861f67ee575a33a3091"} Jan 26 15:00:06 crc kubenswrapper[4823]: I0126 15:00:06.184286 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5f57cfd432e50fe1f415d51da32022fc3a62a67f677b861f67ee575a33a3091" Jan 26 15:00:06 crc kubenswrapper[4823]: I0126 15:00:06.184630 4823 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.317406 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b9p4g"] Jan 26 15:00:07 crc kubenswrapper[4823]: E0126 15:00:07.317849 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a95fb51d-24d9-42e0-a51c-18314aadfb14" containerName="collect-profiles" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.317868 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a95fb51d-24d9-42e0-a51c-18314aadfb14" containerName="collect-profiles" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.318007 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a95fb51d-24d9-42e0-a51c-18314aadfb14" containerName="collect-profiles" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.318955 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.321155 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-dr8ls" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.322544 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.323670 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.325124 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.340844 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b9p4g"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.346343 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.353536 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-2qjs7"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.354785 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.470192 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471129 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471172 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-dbus-socket\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471229 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471288 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sshtv\" (UniqueName: \"kubernetes.io/projected/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-kube-api-access-sshtv\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471323 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-ovs-socket\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-nmstate-lock\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471550 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtt5\" (UniqueName: \"kubernetes.io/projected/139e1af7-704a-48d2-86ca-6b05e2307f72-kube-api-access-4wtt5\") pod \"nmstate-metrics-54757c584b-b9p4g\" (UID: \"139e1af7-704a-48d2-86ca-6b05e2307f72\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.471597 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvc8f\" (UniqueName: \"kubernetes.io/projected/05c32998-fc69-48d2-b15a-98654d444a3f-kube-api-access-hvc8f\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.476298 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jfcqw" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.476775 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.477153 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.493879 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.573151 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-dbus-socket\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.573467 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.573563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-dbus-socket\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: E0126 15:00:07.573651 4823 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 26 15:00:07 crc kubenswrapper[4823]: E0126 15:00:07.573899 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-tls-key-pair podName:ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc nodeName:}" failed. No retries permitted until 2026-01-26 15:00:08.073872924 +0000 UTC m=+804.759336029 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-nrpvd" (UID: "ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc") : secret "openshift-nmstate-webhook" not found Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.573945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sshtv\" (UniqueName: \"kubernetes.io/projected/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-kube-api-access-sshtv\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.573983 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-ovs-socket\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.574008 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-nmstate-lock\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.574032 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wtt5\" (UniqueName: \"kubernetes.io/projected/139e1af7-704a-48d2-86ca-6b05e2307f72-kube-api-access-4wtt5\") pod \"nmstate-metrics-54757c584b-b9p4g\" (UID: \"139e1af7-704a-48d2-86ca-6b05e2307f72\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.574061 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hvc8f\" (UniqueName: \"kubernetes.io/projected/05c32998-fc69-48d2-b15a-98654d444a3f-kube-api-access-hvc8f\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.574115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-ovs-socket\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.574218 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/05c32998-fc69-48d2-b15a-98654d444a3f-nmstate-lock\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.596036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sshtv\" (UniqueName: \"kubernetes.io/projected/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-kube-api-access-sshtv\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.598029 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wtt5\" (UniqueName: \"kubernetes.io/projected/139e1af7-704a-48d2-86ca-6b05e2307f72-kube-api-access-4wtt5\") pod \"nmstate-metrics-54757c584b-b9p4g\" (UID: \"139e1af7-704a-48d2-86ca-6b05e2307f72\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.601000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvc8f\" 
(UniqueName: \"kubernetes.io/projected/05c32998-fc69-48d2-b15a-98654d444a3f-kube-api-access-hvc8f\") pod \"nmstate-handler-2qjs7\" (UID: \"05c32998-fc69-48d2-b15a-98654d444a3f\") " pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.639718 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.674825 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7451d383-fb90-4543-b142-792890477728-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.674897 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrg5\" (UniqueName: \"kubernetes.io/projected/7451d383-fb90-4543-b142-792890477728-kube-api-access-sdrg5\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.674928 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7451d383-fb90-4543-b142-792890477728-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.681723 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.698492 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-595f694657-5lj4z"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.699586 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.716799 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595f694657-5lj4z"] Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.775803 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-serving-cert\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.775884 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7451d383-fb90-4543-b142-792890477728-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.775907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-oauth-config\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.775932 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sdrg5\" (UniqueName: \"kubernetes.io/projected/7451d383-fb90-4543-b142-792890477728-kube-api-access-sdrg5\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.775953 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-service-ca\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.775975 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7451d383-fb90-4543-b142-792890477728-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.776011 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-oauth-serving-cert\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.776054 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-config\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.776074 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-trusted-ca-bundle\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.776098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whjlr\" (UniqueName: \"kubernetes.io/projected/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-kube-api-access-whjlr\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: E0126 15:00:07.776092 4823 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 26 15:00:07 crc kubenswrapper[4823]: E0126 15:00:07.776193 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7451d383-fb90-4543-b142-792890477728-plugin-serving-cert podName:7451d383-fb90-4543-b142-792890477728 nodeName:}" failed. No retries permitted until 2026-01-26 15:00:08.276164161 +0000 UTC m=+804.961627266 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/7451d383-fb90-4543-b142-792890477728-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-l656m" (UID: "7451d383-fb90-4543-b142-792890477728") : secret "plugin-serving-cert" not found Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.778226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7451d383-fb90-4543-b142-792890477728-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.797059 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrg5\" (UniqueName: \"kubernetes.io/projected/7451d383-fb90-4543-b142-792890477728-kube-api-access-sdrg5\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.878037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-trusted-ca-bundle\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.878574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whjlr\" (UniqueName: \"kubernetes.io/projected/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-kube-api-access-whjlr\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.878603 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-serving-cert\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.878649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-oauth-config\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.879479 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-service-ca\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.879525 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-oauth-serving-cert\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.879558 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-config\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.880172 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-config\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.880728 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-service-ca\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.881242 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-oauth-serving-cert\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.883257 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-trusted-ca-bundle\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.885600 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-serving-cert\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.886708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-console-oauth-config\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.902321 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whjlr\" (UniqueName: \"kubernetes.io/projected/22527ed1-9b6c-4bdb-bc7b-3284abde6a66-kube-api-access-whjlr\") pod \"console-595f694657-5lj4z\" (UID: \"22527ed1-9b6c-4bdb-bc7b-3284abde6a66\") " pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:07 crc kubenswrapper[4823]: I0126 15:00:07.961085 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b9p4g"] Jan 26 15:00:07 crc kubenswrapper[4823]: W0126 15:00:07.967709 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod139e1af7_704a_48d2_86ca_6b05e2307f72.slice/crio-94711c7479b45b1c8480955b4a897db4a8e1eb75a731f96ac7e8bd2baecc7ddd WatchSource:0}: Error finding container 94711c7479b45b1c8480955b4a897db4a8e1eb75a731f96ac7e8bd2baecc7ddd: Status 404 returned error can't find the container with id 94711c7479b45b1c8480955b4a897db4a8e1eb75a731f96ac7e8bd2baecc7ddd Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.046455 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.082696 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.088028 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-nrpvd\" (UID: \"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.201022 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2qjs7" event={"ID":"05c32998-fc69-48d2-b15a-98654d444a3f","Type":"ContainerStarted","Data":"351000f6eec81236c257480f524ffd81e466c1178e6cf3facf662771cf3b924a"} Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.203135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" event={"ID":"139e1af7-704a-48d2-86ca-6b05e2307f72","Type":"ContainerStarted","Data":"94711c7479b45b1c8480955b4a897db4a8e1eb75a731f96ac7e8bd2baecc7ddd"} Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.254050 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.258677 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-595f694657-5lj4z"] Jan 26 15:00:08 crc kubenswrapper[4823]: W0126 15:00:08.278860 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22527ed1_9b6c_4bdb_bc7b_3284abde6a66.slice/crio-6b130edbf69c31c64180b5dd2f6bf813f78b1cbf20bfcabccc06a8f408542f6b WatchSource:0}: Error finding container 6b130edbf69c31c64180b5dd2f6bf813f78b1cbf20bfcabccc06a8f408542f6b: Status 404 returned error can't find the container with id 6b130edbf69c31c64180b5dd2f6bf813f78b1cbf20bfcabccc06a8f408542f6b Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.286205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7451d383-fb90-4543-b142-792890477728-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.289684 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7451d383-fb90-4543-b142-792890477728-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-l656m\" (UID: \"7451d383-fb90-4543-b142-792890477728\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.393088 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" Jan 26 15:00:08 crc kubenswrapper[4823]: I0126 15:00:08.978460 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd"] Jan 26 15:00:09 crc kubenswrapper[4823]: W0126 15:00:09.001555 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac7cc86e_81a5_4a00_95dd_183f1b1ee5dc.slice/crio-949c2729750ee02b3e872a7015854e983f6ea1ca696e79bcbf8be2bdff5edfd1 WatchSource:0}: Error finding container 949c2729750ee02b3e872a7015854e983f6ea1ca696e79bcbf8be2bdff5edfd1: Status 404 returned error can't find the container with id 949c2729750ee02b3e872a7015854e983f6ea1ca696e79bcbf8be2bdff5edfd1 Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.109088 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m"] Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.211174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" event={"ID":"7451d383-fb90-4543-b142-792890477728","Type":"ContainerStarted","Data":"a986494f6e2e185e3f7b0d29d9c58f6a1465deb58ecf81f31a2d287516b33b77"} Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.212638 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" event={"ID":"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc","Type":"ContainerStarted","Data":"949c2729750ee02b3e872a7015854e983f6ea1ca696e79bcbf8be2bdff5edfd1"} Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.214125 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595f694657-5lj4z" event={"ID":"22527ed1-9b6c-4bdb-bc7b-3284abde6a66","Type":"ContainerStarted","Data":"90a35f0df46ea219eae2915cb784f8def5500b0c3219d6073d9020365693eabc"} Jan 26 15:00:09 crc 
kubenswrapper[4823]: I0126 15:00:09.214147 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-595f694657-5lj4z" event={"ID":"22527ed1-9b6c-4bdb-bc7b-3284abde6a66","Type":"ContainerStarted","Data":"6b130edbf69c31c64180b5dd2f6bf813f78b1cbf20bfcabccc06a8f408542f6b"} Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.234426 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-595f694657-5lj4z" podStartSLOduration=2.234349774 podStartE2EDuration="2.234349774s" podCreationTimestamp="2026-01-26 15:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:00:09.23417949 +0000 UTC m=+805.919642605" watchObservedRunningTime="2026-01-26 15:00:09.234349774 +0000 UTC m=+805.919812879" Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.526711 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.603736 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 15:00:09 crc kubenswrapper[4823]: I0126 15:00:09.820893 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnp24"] Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.227854 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cnp24" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="registry-server" containerID="cri-o://4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e" gracePeriod=2 Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.669143 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.747790 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prxwh\" (UniqueName: \"kubernetes.io/projected/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-kube-api-access-prxwh\") pod \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.748449 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-utilities\") pod \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.748546 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-catalog-content\") pod \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\" (UID: \"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d\") " Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.749411 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-utilities" (OuterVolumeSpecName: "utilities") pod "3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" (UID: "3a5c0b2f-051f-4811-ac92-b1cbfe0c561d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.758245 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-kube-api-access-prxwh" (OuterVolumeSpecName: "kube-api-access-prxwh") pod "3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" (UID: "3a5c0b2f-051f-4811-ac92-b1cbfe0c561d"). InnerVolumeSpecName "kube-api-access-prxwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.898706 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.898755 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prxwh\" (UniqueName: \"kubernetes.io/projected/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-kube-api-access-prxwh\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:11 crc kubenswrapper[4823]: I0126 15:00:11.948695 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" (UID: "3a5c0b2f-051f-4811-ac92-b1cbfe0c561d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.000172 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.238498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-2qjs7" event={"ID":"05c32998-fc69-48d2-b15a-98654d444a3f","Type":"ContainerStarted","Data":"5b6b124c87721257f8a77c431ee1200e5f27ce63ec488679c7ac792be6b73a6a"} Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.238828 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.242728 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" 
event={"ID":"139e1af7-704a-48d2-86ca-6b05e2307f72","Type":"ContainerStarted","Data":"1bb02ddd42e0afe54e52d0aed313d92ff88b3c9d8db407a467e7db1d205e5996"} Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.245107 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" event={"ID":"ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc","Type":"ContainerStarted","Data":"a91f64c27efca8736e9a68a1f3b5c13b6091b2a06cbdecfe43c84ce50b0322d9"} Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.245256 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.248067 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerID="4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e" exitCode=0 Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.248135 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cnp24" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.248138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerDied","Data":"4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e"} Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.248532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cnp24" event={"ID":"3a5c0b2f-051f-4811-ac92-b1cbfe0c561d","Type":"ContainerDied","Data":"52ec45c33bf608033ae068f7ccd0a3f8ce1e302e9ad265e2c6cce52d0323ba1b"} Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.248564 4823 scope.go:117] "RemoveContainer" containerID="4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.263955 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-2qjs7" podStartSLOduration=1.769150884 podStartE2EDuration="5.263921545s" podCreationTimestamp="2026-01-26 15:00:07 +0000 UTC" firstStartedPulling="2026-01-26 15:00:07.745347329 +0000 UTC m=+804.430810434" lastFinishedPulling="2026-01-26 15:00:11.24011798 +0000 UTC m=+807.925581095" observedRunningTime="2026-01-26 15:00:12.25859861 +0000 UTC m=+808.944061715" watchObservedRunningTime="2026-01-26 15:00:12.263921545 +0000 UTC m=+808.949384650" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.289219 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" podStartSLOduration=3.056062188 podStartE2EDuration="5.289196076s" podCreationTimestamp="2026-01-26 15:00:07 +0000 UTC" firstStartedPulling="2026-01-26 15:00:09.010820827 +0000 UTC m=+805.696283932" lastFinishedPulling="2026-01-26 15:00:11.243954715 +0000 UTC m=+807.929417820" 
observedRunningTime="2026-01-26 15:00:12.284809906 +0000 UTC m=+808.970273011" watchObservedRunningTime="2026-01-26 15:00:12.289196076 +0000 UTC m=+808.974659181" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.305127 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cnp24"] Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.313342 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cnp24"] Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.386943 4823 scope.go:117] "RemoveContainer" containerID="8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.446127 4823 scope.go:117] "RemoveContainer" containerID="eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.475697 4823 scope.go:117] "RemoveContainer" containerID="4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e" Jan 26 15:00:12 crc kubenswrapper[4823]: E0126 15:00:12.477459 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e\": container with ID starting with 4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e not found: ID does not exist" containerID="4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.477524 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e"} err="failed to get container status \"4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e\": rpc error: code = NotFound desc = could not find container \"4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e\": container with ID starting 
with 4dc32469179773c658669be6ccfad3f9763bf86786915d27f3bf08f2d8df7a4e not found: ID does not exist" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.477574 4823 scope.go:117] "RemoveContainer" containerID="8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a" Jan 26 15:00:12 crc kubenswrapper[4823]: E0126 15:00:12.478141 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a\": container with ID starting with 8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a not found: ID does not exist" containerID="8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.478198 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a"} err="failed to get container status \"8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a\": rpc error: code = NotFound desc = could not find container \"8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a\": container with ID starting with 8a931b84d3bd9eab7557ddc6ca02858a0adff1053716861b2880660f4134eb1a not found: ID does not exist" Jan 26 15:00:12 crc kubenswrapper[4823]: I0126 15:00:12.478247 4823 scope.go:117] "RemoveContainer" containerID="eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d" Jan 26 15:00:12 crc kubenswrapper[4823]: E0126 15:00:12.478658 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d\": container with ID starting with eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d not found: ID does not exist" containerID="eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d" Jan 26 15:00:12 
crc kubenswrapper[4823]: I0126 15:00:12.478690 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d"} err="failed to get container status \"eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d\": rpc error: code = NotFound desc = could not find container \"eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d\": container with ID starting with eb27a9eba28fc39d1f981db01cc20100af082eee5e497ea8c7dc143e7f6a014d not found: ID does not exist" Jan 26 15:00:13 crc kubenswrapper[4823]: I0126 15:00:13.258992 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" event={"ID":"7451d383-fb90-4543-b142-792890477728","Type":"ContainerStarted","Data":"0f24d9bbdae2bb2b0c6aef4eef88a364d1255ea983b629e66a129892c14d126e"} Jan 26 15:00:13 crc kubenswrapper[4823]: I0126 15:00:13.290713 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-l656m" podStartSLOduration=2.927271597 podStartE2EDuration="6.29067882s" podCreationTimestamp="2026-01-26 15:00:07 +0000 UTC" firstStartedPulling="2026-01-26 15:00:09.115438095 +0000 UTC m=+805.800901210" lastFinishedPulling="2026-01-26 15:00:12.478845338 +0000 UTC m=+809.164308433" observedRunningTime="2026-01-26 15:00:13.276851372 +0000 UTC m=+809.962314477" watchObservedRunningTime="2026-01-26 15:00:13.29067882 +0000 UTC m=+809.976141935" Jan 26 15:00:13 crc kubenswrapper[4823]: I0126 15:00:13.580464 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" path="/var/lib/kubelet/pods/3a5c0b2f-051f-4811-ac92-b1cbfe0c561d/volumes" Jan 26 15:00:14 crc kubenswrapper[4823]: I0126 15:00:14.266138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" 
event={"ID":"139e1af7-704a-48d2-86ca-6b05e2307f72","Type":"ContainerStarted","Data":"38323834cfd6b51540783698a50c4736bcbae98a1fb00b20354f3f93f6404ccb"} Jan 26 15:00:14 crc kubenswrapper[4823]: I0126 15:00:14.288889 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b9p4g" podStartSLOduration=1.572590443 podStartE2EDuration="7.288873126s" podCreationTimestamp="2026-01-26 15:00:07 +0000 UTC" firstStartedPulling="2026-01-26 15:00:07.970732547 +0000 UTC m=+804.656195652" lastFinishedPulling="2026-01-26 15:00:13.68701523 +0000 UTC m=+810.372478335" observedRunningTime="2026-01-26 15:00:14.287245341 +0000 UTC m=+810.972708446" watchObservedRunningTime="2026-01-26 15:00:14.288873126 +0000 UTC m=+810.974336231" Jan 26 15:00:17 crc kubenswrapper[4823]: I0126 15:00:17.718310 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-2qjs7" Jan 26 15:00:18 crc kubenswrapper[4823]: I0126 15:00:18.047922 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:18 crc kubenswrapper[4823]: I0126 15:00:18.048282 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:18 crc kubenswrapper[4823]: I0126 15:00:18.054093 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:18 crc kubenswrapper[4823]: I0126 15:00:18.304098 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-595f694657-5lj4z" Jan 26 15:00:18 crc kubenswrapper[4823]: I0126 15:00:18.364327 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bbxp2"] Jan 26 15:00:28 crc kubenswrapper[4823]: I0126 15:00:28.265859 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-nrpvd" Jan 26 15:00:34 crc kubenswrapper[4823]: I0126 15:00:34.507905 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:00:34 crc kubenswrapper[4823]: I0126 15:00:34.508926 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:00:34 crc kubenswrapper[4823]: I0126 15:00:34.508995 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:00:34 crc kubenswrapper[4823]: I0126 15:00:34.509813 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec7366454a163fe04376dec76b6dddb0ad3a342a392aad185b53b45a854cd90d"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:00:34 crc kubenswrapper[4823]: I0126 15:00:34.509899 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://ec7366454a163fe04376dec76b6dddb0ad3a342a392aad185b53b45a854cd90d" gracePeriod=600 Jan 26 15:00:35 crc kubenswrapper[4823]: I0126 15:00:35.436723 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="ec7366454a163fe04376dec76b6dddb0ad3a342a392aad185b53b45a854cd90d" exitCode=0 Jan 26 15:00:35 crc kubenswrapper[4823]: I0126 15:00:35.436782 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"ec7366454a163fe04376dec76b6dddb0ad3a342a392aad185b53b45a854cd90d"} Jan 26 15:00:35 crc kubenswrapper[4823]: I0126 15:00:35.437581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"ced2fb81c930871220e3d3d613f6291ccb0288d32b2aebbbf2e414d7715540a7"} Jan 26 15:00:35 crc kubenswrapper[4823]: I0126 15:00:35.437612 4823 scope.go:117] "RemoveContainer" containerID="6060074daac6a743fd06d9a3f457f73f68b68b6876078e35864c532ab12df1fb" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.794468 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws"] Jan 26 15:00:41 crc kubenswrapper[4823]: E0126 15:00:41.795546 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="extract-utilities" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.795565 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="extract-utilities" Jan 26 15:00:41 crc kubenswrapper[4823]: E0126 15:00:41.795580 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="extract-content" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.795586 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="extract-content" Jan 26 
15:00:41 crc kubenswrapper[4823]: E0126 15:00:41.795601 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="registry-server" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.795609 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="registry-server" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.795713 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5c0b2f-051f-4811-ac92-b1cbfe0c561d" containerName="registry-server" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.796635 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.800473 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.804689 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws"] Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.804918 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84t4\" (UniqueName: \"kubernetes.io/projected/17e6305e-431a-4ea1-8180-84f01a16d2c2-kube-api-access-b84t4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.805010 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-bundle\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.805099 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.906693 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.906776 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b84t4\" (UniqueName: \"kubernetes.io/projected/17e6305e-431a-4ea1-8180-84f01a16d2c2-kube-api-access-b84t4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.906831 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.907506 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.907526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:41 crc kubenswrapper[4823]: I0126 15:00:41.933171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b84t4\" (UniqueName: \"kubernetes.io/projected/17e6305e-431a-4ea1-8180-84f01a16d2c2-kube-api-access-b84t4\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:42 crc kubenswrapper[4823]: I0126 15:00:42.117212 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:42 crc kubenswrapper[4823]: I0126 15:00:42.335634 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws"] Jan 26 15:00:42 crc kubenswrapper[4823]: I0126 15:00:42.496635 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" event={"ID":"17e6305e-431a-4ea1-8180-84f01a16d2c2","Type":"ContainerStarted","Data":"53a4f0f9214b3428eb661be6b168088010abfa971e39da1fc3939a5486efd583"} Jan 26 15:00:42 crc kubenswrapper[4823]: I0126 15:00:42.497175 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" event={"ID":"17e6305e-431a-4ea1-8180-84f01a16d2c2","Type":"ContainerStarted","Data":"b0e1079dc362fad3a62f0289b0178fb5411c7df154c602ae4485355db4888435"} Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.425382 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bbxp2" podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" containerName="console" containerID="cri-o://d4bec3dfc2bfadf6f6187733c8ac5aa0c40933408a656efa55da499be54861d2" gracePeriod=15 Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.505927 4823 generic.go:334] "Generic (PLEG): container finished" podID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerID="53a4f0f9214b3428eb661be6b168088010abfa971e39da1fc3939a5486efd583" exitCode=0 Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.505993 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" 
event={"ID":"17e6305e-431a-4ea1-8180-84f01a16d2c2","Type":"ContainerDied","Data":"53a4f0f9214b3428eb661be6b168088010abfa971e39da1fc3939a5486efd583"} Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.818199 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bbxp2_ecfcb396-bdc3-4dcc-98fe-750d1ae4b788/console/0.log" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.818734 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934393 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kmzm\" (UniqueName: \"kubernetes.io/projected/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-kube-api-access-8kmzm\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934565 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-oauth-config\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934693 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-trusted-ca-bundle\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934751 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-service-ca\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " 
Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-oauth-serving-cert\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934870 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-serving-cert\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.934904 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-config\") pod \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\" (UID: \"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788\") " Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.936093 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.936214 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.936911 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-service-ca" (OuterVolumeSpecName: "service-ca") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.936970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-config" (OuterVolumeSpecName: "console-config") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.937532 4823 scope.go:117] "RemoveContainer" containerID="d4bec3dfc2bfadf6f6187733c8ac5aa0c40933408a656efa55da499be54861d2" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.942532 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.943106 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:00:43 crc kubenswrapper[4823]: I0126 15:00:43.945582 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-kube-api-access-8kmzm" (OuterVolumeSpecName: "kube-api-access-8kmzm") pod "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" (UID: "ecfcb396-bdc3-4dcc-98fe-750d1ae4b788"). InnerVolumeSpecName "kube-api-access-8kmzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037096 4823 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037151 4823 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037173 4823 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037195 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kmzm\" (UniqueName: \"kubernetes.io/projected/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-kube-api-access-8kmzm\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037241 4823 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037260 4823 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.037279 4823 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788-service-ca\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.516120 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbxp2" event={"ID":"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788","Type":"ContainerDied","Data":"d4bec3dfc2bfadf6f6187733c8ac5aa0c40933408a656efa55da499be54861d2"} Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.516754 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bbxp2" event={"ID":"ecfcb396-bdc3-4dcc-98fe-750d1ae4b788","Type":"ContainerDied","Data":"3fe2611bb818290d71bdd55ec48c6857d52f238a4d29e8bb57e7b923da372c35"} Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.516194 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bbxp2" Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.562529 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bbxp2"] Jan 26 15:00:44 crc kubenswrapper[4823]: I0126 15:00:44.566903 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bbxp2"] Jan 26 15:00:45 crc kubenswrapper[4823]: I0126 15:00:45.526875 4823 generic.go:334] "Generic (PLEG): container finished" podID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerID="f930a5725b014021a63ed0f6a76822eb8ce6996aaae055c3741b874d0a956149" exitCode=0 Jan 26 15:00:45 crc kubenswrapper[4823]: I0126 15:00:45.526981 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" event={"ID":"17e6305e-431a-4ea1-8180-84f01a16d2c2","Type":"ContainerDied","Data":"f930a5725b014021a63ed0f6a76822eb8ce6996aaae055c3741b874d0a956149"} Jan 26 15:00:45 crc kubenswrapper[4823]: I0126 15:00:45.580901 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" path="/var/lib/kubelet/pods/ecfcb396-bdc3-4dcc-98fe-750d1ae4b788/volumes" Jan 26 15:00:46 crc kubenswrapper[4823]: I0126 15:00:46.541543 4823 generic.go:334] "Generic (PLEG): container finished" podID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerID="3113719b2d4c62f632bef71e7d6cafff2d08e3221798bd6a63a7f72a845c4479" exitCode=0 Jan 26 15:00:46 crc kubenswrapper[4823]: I0126 15:00:46.541639 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" event={"ID":"17e6305e-431a-4ea1-8180-84f01a16d2c2","Type":"ContainerDied","Data":"3113719b2d4c62f632bef71e7d6cafff2d08e3221798bd6a63a7f72a845c4479"} Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.794302 4823 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.895590 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-bundle\") pod \"17e6305e-431a-4ea1-8180-84f01a16d2c2\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.895746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-util\") pod \"17e6305e-431a-4ea1-8180-84f01a16d2c2\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.895872 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b84t4\" (UniqueName: \"kubernetes.io/projected/17e6305e-431a-4ea1-8180-84f01a16d2c2-kube-api-access-b84t4\") pod \"17e6305e-431a-4ea1-8180-84f01a16d2c2\" (UID: \"17e6305e-431a-4ea1-8180-84f01a16d2c2\") " Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.897968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-bundle" (OuterVolumeSpecName: "bundle") pod "17e6305e-431a-4ea1-8180-84f01a16d2c2" (UID: "17e6305e-431a-4ea1-8180-84f01a16d2c2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.904276 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e6305e-431a-4ea1-8180-84f01a16d2c2-kube-api-access-b84t4" (OuterVolumeSpecName: "kube-api-access-b84t4") pod "17e6305e-431a-4ea1-8180-84f01a16d2c2" (UID: "17e6305e-431a-4ea1-8180-84f01a16d2c2"). InnerVolumeSpecName "kube-api-access-b84t4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.918659 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-util" (OuterVolumeSpecName: "util") pod "17e6305e-431a-4ea1-8180-84f01a16d2c2" (UID: "17e6305e-431a-4ea1-8180-84f01a16d2c2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.997888 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.997952 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/17e6305e-431a-4ea1-8180-84f01a16d2c2-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:47 crc kubenswrapper[4823]: I0126 15:00:47.997972 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b84t4\" (UniqueName: \"kubernetes.io/projected/17e6305e-431a-4ea1-8180-84f01a16d2c2-kube-api-access-b84t4\") on node \"crc\" DevicePath \"\"" Jan 26 15:00:48 crc kubenswrapper[4823]: I0126 15:00:48.558056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" event={"ID":"17e6305e-431a-4ea1-8180-84f01a16d2c2","Type":"ContainerDied","Data":"b0e1079dc362fad3a62f0289b0178fb5411c7df154c602ae4485355db4888435"} Jan 26 15:00:48 crc kubenswrapper[4823]: I0126 15:00:48.558647 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0e1079dc362fad3a62f0289b0178fb5411c7df154c602ae4485355db4888435" Jan 26 15:00:48 crc kubenswrapper[4823]: I0126 15:00:48.558153 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.060081 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7"] Jan 26 15:00:57 crc kubenswrapper[4823]: E0126 15:00:57.060996 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerName="pull" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061010 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerName="pull" Jan 26 15:00:57 crc kubenswrapper[4823]: E0126 15:00:57.061023 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerName="util" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061029 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerName="util" Jan 26 15:00:57 crc kubenswrapper[4823]: E0126 15:00:57.061036 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerName="extract" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061043 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" containerName="extract" Jan 26 15:00:57 crc kubenswrapper[4823]: E0126 15:00:57.061054 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" containerName="console" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061063 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" containerName="console" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061158 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e6305e-431a-4ea1-8180-84f01a16d2c2" 
containerName="extract" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061171 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecfcb396-bdc3-4dcc-98fe-750d1ae4b788" containerName="console" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.061619 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.064479 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.064516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.065933 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9bfbc" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.068726 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.068996 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.085346 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7"] Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.229121 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/591314cb-abf9-43bb-88a6-7ea227f99818-apiservice-cert\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 
26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.229419 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64pkb\" (UniqueName: \"kubernetes.io/projected/591314cb-abf9-43bb-88a6-7ea227f99818-kube-api-access-64pkb\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.229544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/591314cb-abf9-43bb-88a6-7ea227f99818-webhook-cert\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.331571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64pkb\" (UniqueName: \"kubernetes.io/projected/591314cb-abf9-43bb-88a6-7ea227f99818-kube-api-access-64pkb\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.331639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/591314cb-abf9-43bb-88a6-7ea227f99818-webhook-cert\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.331691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/591314cb-abf9-43bb-88a6-7ea227f99818-apiservice-cert\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.340590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/591314cb-abf9-43bb-88a6-7ea227f99818-webhook-cert\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.348490 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/591314cb-abf9-43bb-88a6-7ea227f99818-apiservice-cert\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.373631 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64pkb\" (UniqueName: \"kubernetes.io/projected/591314cb-abf9-43bb-88a6-7ea227f99818-kube-api-access-64pkb\") pod \"metallb-operator-controller-manager-754dfc8bcc-pmxg7\" (UID: \"591314cb-abf9-43bb-88a6-7ea227f99818\") " pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.382299 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.413768 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk"] Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.414962 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.419272 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.419785 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-ljn4w" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.421583 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.441691 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk"] Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.535517 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-webhook-cert\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.536082 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.536150 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2m62\" (UniqueName: \"kubernetes.io/projected/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-kube-api-access-n2m62\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.637793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-webhook-cert\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.637874 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-apiservice-cert\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.637945 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2m62\" (UniqueName: \"kubernetes.io/projected/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-kube-api-access-n2m62\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 
15:00:57.657628 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-webhook-cert\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.658321 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-apiservice-cert\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.662409 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2m62\" (UniqueName: \"kubernetes.io/projected/6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec-kube-api-access-n2m62\") pod \"metallb-operator-webhook-server-5f6c667744-wvxxk\" (UID: \"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec\") " pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.670327 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7"] Jan 26 15:00:57 crc kubenswrapper[4823]: I0126 15:00:57.767213 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:00:58 crc kubenswrapper[4823]: I0126 15:00:58.021301 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk"] Jan 26 15:00:58 crc kubenswrapper[4823]: W0126 15:00:58.032715 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b5a60ad_91b4_4a1d_8f5d_5208b533d8ec.slice/crio-e4ce9683a7557e447126b589b50be40cc1a82a478ed0e51c10f007f6b711302d WatchSource:0}: Error finding container e4ce9683a7557e447126b589b50be40cc1a82a478ed0e51c10f007f6b711302d: Status 404 returned error can't find the container with id e4ce9683a7557e447126b589b50be40cc1a82a478ed0e51c10f007f6b711302d Jan 26 15:00:58 crc kubenswrapper[4823]: I0126 15:00:58.619634 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" event={"ID":"591314cb-abf9-43bb-88a6-7ea227f99818","Type":"ContainerStarted","Data":"9482ee127f10d19053bc8ec74396048e34ff532d58f817f5e5ed653b7a8b0f8c"} Jan 26 15:00:58 crc kubenswrapper[4823]: I0126 15:00:58.621199 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" event={"ID":"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec","Type":"ContainerStarted","Data":"e4ce9683a7557e447126b589b50be40cc1a82a478ed0e51c10f007f6b711302d"} Jan 26 15:01:08 crc kubenswrapper[4823]: I0126 15:01:08.803018 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" event={"ID":"591314cb-abf9-43bb-88a6-7ea227f99818","Type":"ContainerStarted","Data":"1855e42d1a7099176818d80f912ea2ce4b2b509f3d027c0ad585f8b8b1fb1d92"} Jan 26 15:01:08 crc kubenswrapper[4823]: I0126 15:01:08.803627 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:01:08 crc kubenswrapper[4823]: I0126 15:01:08.806980 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" event={"ID":"6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec","Type":"ContainerStarted","Data":"e5ee60bf543b60a41ff674cb046543df5c28be32e5187b06fbce2d1fc5c82464"} Jan 26 15:01:08 crc kubenswrapper[4823]: I0126 15:01:08.807168 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:01:08 crc kubenswrapper[4823]: I0126 15:01:08.831225 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" podStartSLOduration=1.7428196200000001 podStartE2EDuration="11.831205226s" podCreationTimestamp="2026-01-26 15:00:57 +0000 UTC" firstStartedPulling="2026-01-26 15:00:57.690056224 +0000 UTC m=+854.375519329" lastFinishedPulling="2026-01-26 15:01:07.77844183 +0000 UTC m=+864.463904935" observedRunningTime="2026-01-26 15:01:08.828155713 +0000 UTC m=+865.513618838" watchObservedRunningTime="2026-01-26 15:01:08.831205226 +0000 UTC m=+865.516668331" Jan 26 15:01:08 crc kubenswrapper[4823]: I0126 15:01:08.851676 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" podStartSLOduration=2.093296999 podStartE2EDuration="11.851658833s" podCreationTimestamp="2026-01-26 15:00:57 +0000 UTC" firstStartedPulling="2026-01-26 15:00:58.035776242 +0000 UTC m=+854.721239347" lastFinishedPulling="2026-01-26 15:01:07.794138076 +0000 UTC m=+864.479601181" observedRunningTime="2026-01-26 15:01:08.84934149 +0000 UTC m=+865.534804595" watchObservedRunningTime="2026-01-26 15:01:08.851658833 +0000 UTC m=+865.537121938" Jan 26 15:01:17 crc kubenswrapper[4823]: I0126 15:01:17.773099 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" Jan 26 15:01:37 crc kubenswrapper[4823]: I0126 15:01:37.385510 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-754dfc8bcc-pmxg7" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.192769 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-spxlq"] Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.195058 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.200528 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.204393 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-chd9j" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.205530 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.206238 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr"] Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.207021 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.208978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.235343 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr"] Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294549 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-sockets\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294596 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba560662-8eb2-4812-86a4-bf963eb97bf0-metrics-certs\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294624 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-metrics\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294655 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90ae223a-8f0d-43c4-afb1-b6de69aebef6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc 
kubenswrapper[4823]: I0126 15:01:38.294674 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb88r\" (UniqueName: \"kubernetes.io/projected/90ae223a-8f0d-43c4-afb1-b6de69aebef6-kube-api-access-xb88r\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294703 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-reloader\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94s72\" (UniqueName: \"kubernetes.io/projected/ba560662-8eb2-4812-86a4-bf963eb97bf0-kube-api-access-94s72\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-startup\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.294777 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-conf\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 
15:01:38.395848 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-sockets\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.396985 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba560662-8eb2-4812-86a4-bf963eb97bf0-metrics-certs\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397074 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-metrics\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397172 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90ae223a-8f0d-43c4-afb1-b6de69aebef6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397255 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb88r\" (UniqueName: \"kubernetes.io/projected/90ae223a-8f0d-43c4-afb1-b6de69aebef6-kube-api-access-xb88r\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397542 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" 
(UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-reloader\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397622 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94s72\" (UniqueName: \"kubernetes.io/projected/ba560662-8eb2-4812-86a4-bf963eb97bf0-kube-api-access-94s72\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-startup\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.397819 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-conf\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.398197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-conf\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.396430 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-sockets\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc 
kubenswrapper[4823]: I0126 15:01:38.399885 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-metrics\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: E0126 15:01:38.400023 4823 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 26 15:01:38 crc kubenswrapper[4823]: E0126 15:01:38.400079 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/90ae223a-8f0d-43c4-afb1-b6de69aebef6-cert podName:90ae223a-8f0d-43c4-afb1-b6de69aebef6 nodeName:}" failed. No retries permitted until 2026-01-26 15:01:38.900059105 +0000 UTC m=+895.585522410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/90ae223a-8f0d-43c4-afb1-b6de69aebef6-cert") pod "frr-k8s-webhook-server-7df86c4f6c-bjdsr" (UID: "90ae223a-8f0d-43c4-afb1-b6de69aebef6") : secret "frr-k8s-webhook-server-cert" not found Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.400658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ba560662-8eb2-4812-86a4-bf963eb97bf0-reloader\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.401348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ba560662-8eb2-4812-86a4-bf963eb97bf0-frr-startup\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.406016 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ba560662-8eb2-4812-86a4-bf963eb97bf0-metrics-certs\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.420570 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94s72\" (UniqueName: \"kubernetes.io/projected/ba560662-8eb2-4812-86a4-bf963eb97bf0-kube-api-access-94s72\") pod \"frr-k8s-spxlq\" (UID: \"ba560662-8eb2-4812-86a4-bf963eb97bf0\") " pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.470862 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb88r\" (UniqueName: \"kubernetes.io/projected/90ae223a-8f0d-43c4-afb1-b6de69aebef6-kube-api-access-xb88r\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.544115 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.607141 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lglsz"] Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.608353 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.616017 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-c9bvz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.617310 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.617543 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.623188 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.637676 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-894zb"] Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.638764 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.642083 4823 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.653567 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-894zb"] Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.747585 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ljz8\" (UniqueName: \"kubernetes.io/projected/77e0c22b-572f-4c51-bb37-158f84671365-kube-api-access-7ljz8\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.747656 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bm2m\" (UniqueName: \"kubernetes.io/projected/e883d469-e238-4d04-958a-6b4d2b0ae8be-kube-api-access-9bm2m\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.748586 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e883d469-e238-4d04-958a-6b4d2b0ae8be-metallb-excludel2\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.748660 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e0c22b-572f-4c51-bb37-158f84671365-metrics-certs\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " 
pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.748700 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e0c22b-572f-4c51-bb37-158f84671365-cert\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.748720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.748779 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-metrics-certs\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.850365 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e0c22b-572f-4c51-bb37-158f84671365-cert\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.850948 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.850995 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-metrics-certs\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.851040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ljz8\" (UniqueName: \"kubernetes.io/projected/77e0c22b-572f-4c51-bb37-158f84671365-kube-api-access-7ljz8\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.851057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bm2m\" (UniqueName: \"kubernetes.io/projected/e883d469-e238-4d04-958a-6b4d2b0ae8be-kube-api-access-9bm2m\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.851114 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e883d469-e238-4d04-958a-6b4d2b0ae8be-metallb-excludel2\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.851143 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e0c22b-572f-4c51-bb37-158f84671365-metrics-certs\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: E0126 15:01:38.851670 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" 
not found Jan 26 15:01:38 crc kubenswrapper[4823]: E0126 15:01:38.851778 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist podName:e883d469-e238-4d04-958a-6b4d2b0ae8be nodeName:}" failed. No retries permitted until 2026-01-26 15:01:39.351754136 +0000 UTC m=+896.037217241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist") pod "speaker-lglsz" (UID: "e883d469-e238-4d04-958a-6b4d2b0ae8be") : secret "metallb-memberlist" not found Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.852853 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/e883d469-e238-4d04-958a-6b4d2b0ae8be-metallb-excludel2\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.855150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/77e0c22b-572f-4c51-bb37-158f84671365-cert\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.855725 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77e0c22b-572f-4c51-bb37-158f84671365-metrics-certs\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.856118 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-metrics-certs\") pod \"speaker-lglsz\" 
(UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.869113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ljz8\" (UniqueName: \"kubernetes.io/projected/77e0c22b-572f-4c51-bb37-158f84671365-kube-api-access-7ljz8\") pod \"controller-6968d8fdc4-894zb\" (UID: \"77e0c22b-572f-4c51-bb37-158f84671365\") " pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.870657 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bm2m\" (UniqueName: \"kubernetes.io/projected/e883d469-e238-4d04-958a-6b4d2b0ae8be-kube-api-access-9bm2m\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.952742 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90ae223a-8f0d-43c4-afb1-b6de69aebef6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.956900 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/90ae223a-8f0d-43c4-afb1-b6de69aebef6-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-bjdsr\" (UID: \"90ae223a-8f0d-43c4-afb1-b6de69aebef6\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:38 crc kubenswrapper[4823]: I0126 15:01:38.966123 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:39 crc kubenswrapper[4823]: I0126 15:01:39.055353 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"1551d0ca7796749184f5886200f2d04e40685587adfdbc7b13a1668a438af1e1"} Jan 26 15:01:39 crc kubenswrapper[4823]: I0126 15:01:39.122526 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:39 crc kubenswrapper[4823]: I0126 15:01:39.358243 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:39 crc kubenswrapper[4823]: E0126 15:01:39.358511 4823 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 26 15:01:39 crc kubenswrapper[4823]: E0126 15:01:39.358744 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist podName:e883d469-e238-4d04-958a-6b4d2b0ae8be nodeName:}" failed. No retries permitted until 2026-01-26 15:01:40.358724321 +0000 UTC m=+897.044187426 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist") pod "speaker-lglsz" (UID: "e883d469-e238-4d04-958a-6b4d2b0ae8be") : secret "metallb-memberlist" not found Jan 26 15:01:39 crc kubenswrapper[4823]: I0126 15:01:39.420596 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr"] Jan 26 15:01:39 crc kubenswrapper[4823]: W0126 15:01:39.429280 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90ae223a_8f0d_43c4_afb1_b6de69aebef6.slice/crio-462c2fa6307d1ffbc6e9fe90a784015fe34356983ed07fa6d156680ebf407cca WatchSource:0}: Error finding container 462c2fa6307d1ffbc6e9fe90a784015fe34356983ed07fa6d156680ebf407cca: Status 404 returned error can't find the container with id 462c2fa6307d1ffbc6e9fe90a784015fe34356983ed07fa6d156680ebf407cca Jan 26 15:01:39 crc kubenswrapper[4823]: I0126 15:01:39.452485 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-894zb"] Jan 26 15:01:39 crc kubenswrapper[4823]: W0126 15:01:39.466570 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77e0c22b_572f_4c51_bb37_158f84671365.slice/crio-6f10f01b09838a1b123fd974e3adb4f7eeeba4ff9efa0878b713cf8fe5418c4f WatchSource:0}: Error finding container 6f10f01b09838a1b123fd974e3adb4f7eeeba4ff9efa0878b713cf8fe5418c4f: Status 404 returned error can't find the container with id 6f10f01b09838a1b123fd974e3adb4f7eeeba4ff9efa0878b713cf8fe5418c4f Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.064836 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-894zb" event={"ID":"77e0c22b-572f-4c51-bb37-158f84671365","Type":"ContainerStarted","Data":"a87930a701ef576ee1352a58c1024b5495b570509a82f5c6fde1091f26918674"} Jan 
26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.065224 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-894zb" event={"ID":"77e0c22b-572f-4c51-bb37-158f84671365","Type":"ContainerStarted","Data":"04dd7d2fce77a56c5bc667142d433ab5621c5dc21251c6e40711409ab4b0fe1e"} Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.065237 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-894zb" event={"ID":"77e0c22b-572f-4c51-bb37-158f84671365","Type":"ContainerStarted","Data":"6f10f01b09838a1b123fd974e3adb4f7eeeba4ff9efa0878b713cf8fe5418c4f"} Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.065252 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.066556 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" event={"ID":"90ae223a-8f0d-43c4-afb1-b6de69aebef6","Type":"ContainerStarted","Data":"462c2fa6307d1ffbc6e9fe90a784015fe34356983ed07fa6d156680ebf407cca"} Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.084431 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-894zb" podStartSLOduration=2.084412818 podStartE2EDuration="2.084412818s" podCreationTimestamp="2026-01-26 15:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:01:40.08049948 +0000 UTC m=+896.765962585" watchObservedRunningTime="2026-01-26 15:01:40.084412818 +0000 UTC m=+896.769875923" Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.375055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist\") pod \"speaker-lglsz\" (UID: 
\"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.386245 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/e883d469-e238-4d04-958a-6b4d2b0ae8be-memberlist\") pod \"speaker-lglsz\" (UID: \"e883d469-e238-4d04-958a-6b4d2b0ae8be\") " pod="metallb-system/speaker-lglsz" Jan 26 15:01:40 crc kubenswrapper[4823]: I0126 15:01:40.459220 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-lglsz" Jan 26 15:01:40 crc kubenswrapper[4823]: W0126 15:01:40.507603 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode883d469_e238_4d04_958a_6b4d2b0ae8be.slice/crio-c21d16d9d471a7060f4f7a7c3406e4f1ea6899c39191e1576936b91d87e91f38 WatchSource:0}: Error finding container c21d16d9d471a7060f4f7a7c3406e4f1ea6899c39191e1576936b91d87e91f38: Status 404 returned error can't find the container with id c21d16d9d471a7060f4f7a7c3406e4f1ea6899c39191e1576936b91d87e91f38 Jan 26 15:01:41 crc kubenswrapper[4823]: I0126 15:01:41.075148 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lglsz" event={"ID":"e883d469-e238-4d04-958a-6b4d2b0ae8be","Type":"ContainerStarted","Data":"a640d262939c392acebae8c77e84f87dcc76dac5f888534c74aea260ea5a15bc"} Jan 26 15:01:41 crc kubenswrapper[4823]: I0126 15:01:41.075526 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lglsz" event={"ID":"e883d469-e238-4d04-958a-6b4d2b0ae8be","Type":"ContainerStarted","Data":"c21d16d9d471a7060f4f7a7c3406e4f1ea6899c39191e1576936b91d87e91f38"} Jan 26 15:01:42 crc kubenswrapper[4823]: I0126 15:01:42.115543 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lglsz" 
event={"ID":"e883d469-e238-4d04-958a-6b4d2b0ae8be","Type":"ContainerStarted","Data":"714549aea6ec9584575b4e112671529b83b5d16674e43b077924462b3d10038f"} Jan 26 15:01:42 crc kubenswrapper[4823]: I0126 15:01:42.115746 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lglsz" Jan 26 15:01:42 crc kubenswrapper[4823]: I0126 15:01:42.169948 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lglsz" podStartSLOduration=4.169930196 podStartE2EDuration="4.169930196s" podCreationTimestamp="2026-01-26 15:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:01:42.165441304 +0000 UTC m=+898.850904419" watchObservedRunningTime="2026-01-26 15:01:42.169930196 +0000 UTC m=+898.855393301" Jan 26 15:01:50 crc kubenswrapper[4823]: I0126 15:01:50.197296 4823 generic.go:334] "Generic (PLEG): container finished" podID="ba560662-8eb2-4812-86a4-bf963eb97bf0" containerID="fcdfc27d01e44681ad7ac5c9f0232cf22dcca8fd478825a7bcf1a70d00c711dd" exitCode=0 Jan 26 15:01:50 crc kubenswrapper[4823]: I0126 15:01:50.197429 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerDied","Data":"fcdfc27d01e44681ad7ac5c9f0232cf22dcca8fd478825a7bcf1a70d00c711dd"} Jan 26 15:01:50 crc kubenswrapper[4823]: I0126 15:01:50.201603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" event={"ID":"90ae223a-8f0d-43c4-afb1-b6de69aebef6","Type":"ContainerStarted","Data":"8f956c3bd85d2c2437f678c6b18ec1301add3cd33033b0d2a6c0996e32d89751"} Jan 26 15:01:50 crc kubenswrapper[4823]: I0126 15:01:50.202419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:50 crc kubenswrapper[4823]: 
I0126 15:01:50.248907 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" podStartSLOduration=2.17274351 podStartE2EDuration="12.248885392s" podCreationTimestamp="2026-01-26 15:01:38 +0000 UTC" firstStartedPulling="2026-01-26 15:01:39.433194197 +0000 UTC m=+896.118657302" lastFinishedPulling="2026-01-26 15:01:49.509336079 +0000 UTC m=+906.194799184" observedRunningTime="2026-01-26 15:01:50.246877647 +0000 UTC m=+906.932340752" watchObservedRunningTime="2026-01-26 15:01:50.248885392 +0000 UTC m=+906.934348497" Jan 26 15:01:50 crc kubenswrapper[4823]: I0126 15:01:50.464193 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lglsz" Jan 26 15:01:51 crc kubenswrapper[4823]: I0126 15:01:51.212680 4823 generic.go:334] "Generic (PLEG): container finished" podID="ba560662-8eb2-4812-86a4-bf963eb97bf0" containerID="4d32d18ff32e922945f76e30e37940a58763d87930412dcc25cedca6743d75d9" exitCode=0 Jan 26 15:01:51 crc kubenswrapper[4823]: I0126 15:01:51.212759 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerDied","Data":"4d32d18ff32e922945f76e30e37940a58763d87930412dcc25cedca6743d75d9"} Jan 26 15:01:52 crc kubenswrapper[4823]: I0126 15:01:52.238340 4823 generic.go:334] "Generic (PLEG): container finished" podID="ba560662-8eb2-4812-86a4-bf963eb97bf0" containerID="70e8aac7be3713945d617757a8b7d1ba6deed6f572dfd4beaa5fa8eef0e5bc1a" exitCode=0 Jan 26 15:01:52 crc kubenswrapper[4823]: I0126 15:01:52.238498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerDied","Data":"70e8aac7be3713945d617757a8b7d1ba6deed6f572dfd4beaa5fa8eef0e5bc1a"} Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.248557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"40031f1f430f4dfd3f87403e80c2326f30730f808fd43b6c393ec180da3f3883"} Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.248961 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"63fd9dc08175b933703a547bc2a97f5606141c9a8f7f3bb5b0d2983d13a28fbd"} Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.249004 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"907df06cd2c480c8a13d765e730c8b47bc7d8eb87ebe18df635d4ac596f933ef"} Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.249016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"3725865e62aede62253b51f1657b3a6478bdb81e3f99121065500c2e90f288f9"} Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.249027 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"ee854ea431d0f550cd32e9f75c7cc8edd88137ec8bf226b016da247fda6c1f67"} Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.871178 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gc68h"] Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.872095 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.874516 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.874873 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.875145 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-wl4sg" Jan 26 15:01:53 crc kubenswrapper[4823]: I0126 15:01:53.884298 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gc68h"] Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.013490 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7cjn\" (UniqueName: \"kubernetes.io/projected/8e09aaea-d05d-44ea-99cc-d01284c8bf35-kube-api-access-w7cjn\") pod \"openstack-operator-index-gc68h\" (UID: \"8e09aaea-d05d-44ea-99cc-d01284c8bf35\") " pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.115544 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7cjn\" (UniqueName: \"kubernetes.io/projected/8e09aaea-d05d-44ea-99cc-d01284c8bf35-kube-api-access-w7cjn\") pod \"openstack-operator-index-gc68h\" (UID: \"8e09aaea-d05d-44ea-99cc-d01284c8bf35\") " pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.136189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7cjn\" (UniqueName: \"kubernetes.io/projected/8e09aaea-d05d-44ea-99cc-d01284c8bf35-kube-api-access-w7cjn\") pod \"openstack-operator-index-gc68h\" (UID: 
\"8e09aaea-d05d-44ea-99cc-d01284c8bf35\") " pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.195809 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.262437 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-spxlq" event={"ID":"ba560662-8eb2-4812-86a4-bf963eb97bf0","Type":"ContainerStarted","Data":"55d69cff0427e7a351e9725463069c5a0a6e1fd376e8b42a82b7bf12508c205a"} Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.263693 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.292591 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-spxlq" podStartSLOduration=5.52903785 podStartE2EDuration="16.292563946s" podCreationTimestamp="2026-01-26 15:01:38 +0000 UTC" firstStartedPulling="2026-01-26 15:01:38.720618887 +0000 UTC m=+895.406081982" lastFinishedPulling="2026-01-26 15:01:49.484144973 +0000 UTC m=+906.169608078" observedRunningTime="2026-01-26 15:01:54.287679203 +0000 UTC m=+910.973142318" watchObservedRunningTime="2026-01-26 15:01:54.292563946 +0000 UTC m=+910.978027051" Jan 26 15:01:54 crc kubenswrapper[4823]: I0126 15:01:54.462738 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gc68h"] Jan 26 15:01:55 crc kubenswrapper[4823]: I0126 15:01:55.279555 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gc68h" event={"ID":"8e09aaea-d05d-44ea-99cc-d01284c8bf35","Type":"ContainerStarted","Data":"38ac5a9d0bf098907d94611e0e9ffe6a25ede473ab16bd04f4a439d746aa5659"} Jan 26 15:01:57 crc kubenswrapper[4823]: I0126 15:01:57.248485 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/openstack-operator-index-gc68h"] Jan 26 15:01:57 crc kubenswrapper[4823]: I0126 15:01:57.295514 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gc68h" event={"ID":"8e09aaea-d05d-44ea-99cc-d01284c8bf35","Type":"ContainerStarted","Data":"f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10"} Jan 26 15:01:57 crc kubenswrapper[4823]: I0126 15:01:57.315545 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gc68h" podStartSLOduration=2.282808852 podStartE2EDuration="4.315517474s" podCreationTimestamp="2026-01-26 15:01:53 +0000 UTC" firstStartedPulling="2026-01-26 15:01:54.477424396 +0000 UTC m=+911.162887511" lastFinishedPulling="2026-01-26 15:01:56.510133028 +0000 UTC m=+913.195596133" observedRunningTime="2026-01-26 15:01:57.310493097 +0000 UTC m=+913.995956222" watchObservedRunningTime="2026-01-26 15:01:57.315517474 +0000 UTC m=+914.000980579" Jan 26 15:01:57 crc kubenswrapper[4823]: I0126 15:01:57.861243 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qvd8b"] Jan 26 15:01:57 crc kubenswrapper[4823]: I0126 15:01:57.862667 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:01:57 crc kubenswrapper[4823]: I0126 15:01:57.888118 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qvd8b"] Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.044706 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8nk\" (UniqueName: \"kubernetes.io/projected/b014fc7e-587f-402b-adb2-2be3c1911e15-kube-api-access-gn8nk\") pod \"openstack-operator-index-qvd8b\" (UID: \"b014fc7e-587f-402b-adb2-2be3c1911e15\") " pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.145797 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8nk\" (UniqueName: \"kubernetes.io/projected/b014fc7e-587f-402b-adb2-2be3c1911e15-kube-api-access-gn8nk\") pod \"openstack-operator-index-qvd8b\" (UID: \"b014fc7e-587f-402b-adb2-2be3c1911e15\") " pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.182765 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8nk\" (UniqueName: \"kubernetes.io/projected/b014fc7e-587f-402b-adb2-2be3c1911e15-kube-api-access-gn8nk\") pod \"openstack-operator-index-qvd8b\" (UID: \"b014fc7e-587f-402b-adb2-2be3c1911e15\") " pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.195832 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.305693 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-gc68h" podUID="8e09aaea-d05d-44ea-99cc-d01284c8bf35" containerName="registry-server" containerID="cri-o://f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10" gracePeriod=2 Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.545580 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.593176 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-spxlq" Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.808950 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qvd8b"] Jan 26 15:01:58 crc kubenswrapper[4823]: I0126 15:01:58.975441 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-894zb" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.132585 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-bjdsr" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.275088 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.317712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvd8b" event={"ID":"b014fc7e-587f-402b-adb2-2be3c1911e15","Type":"ContainerStarted","Data":"351c625622f7e2becb752397273fc89e8e84d7742604646dedf5979c08396fad"} Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.317791 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qvd8b" event={"ID":"b014fc7e-587f-402b-adb2-2be3c1911e15","Type":"ContainerStarted","Data":"20af2f005023ea33bbac34a529566f9779529b51f6465e5d75e749fdb762dbab"} Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.319889 4823 generic.go:334] "Generic (PLEG): container finished" podID="8e09aaea-d05d-44ea-99cc-d01284c8bf35" containerID="f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10" exitCode=0 Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.320207 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-gc68h" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.320462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gc68h" event={"ID":"8e09aaea-d05d-44ea-99cc-d01284c8bf35","Type":"ContainerDied","Data":"f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10"} Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.320485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gc68h" event={"ID":"8e09aaea-d05d-44ea-99cc-d01284c8bf35","Type":"ContainerDied","Data":"38ac5a9d0bf098907d94611e0e9ffe6a25ede473ab16bd04f4a439d746aa5659"} Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.320504 4823 scope.go:117] "RemoveContainer" containerID="f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.340233 4823 scope.go:117] "RemoveContainer" containerID="f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10" Jan 26 15:01:59 crc kubenswrapper[4823]: E0126 15:01:59.340924 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10\": container with ID starting with f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10 not found: ID does not exist" containerID="f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.341029 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10"} err="failed to get container status \"f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10\": rpc error: code = NotFound desc = could not find container 
\"f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10\": container with ID starting with f80bb3ea007f94766385cda4d07e5361f9ff1885f524871fe2b1c7a3be3f3a10 not found: ID does not exist" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.341454 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qvd8b" podStartSLOduration=2.291559173 podStartE2EDuration="2.34142959s" podCreationTimestamp="2026-01-26 15:01:57 +0000 UTC" firstStartedPulling="2026-01-26 15:01:58.841528687 +0000 UTC m=+915.526991792" lastFinishedPulling="2026-01-26 15:01:58.891399104 +0000 UTC m=+915.576862209" observedRunningTime="2026-01-26 15:01:59.335003436 +0000 UTC m=+916.020466541" watchObservedRunningTime="2026-01-26 15:01:59.34142959 +0000 UTC m=+916.026892695" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.363191 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7cjn\" (UniqueName: \"kubernetes.io/projected/8e09aaea-d05d-44ea-99cc-d01284c8bf35-kube-api-access-w7cjn\") pod \"8e09aaea-d05d-44ea-99cc-d01284c8bf35\" (UID: \"8e09aaea-d05d-44ea-99cc-d01284c8bf35\") " Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.371717 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e09aaea-d05d-44ea-99cc-d01284c8bf35-kube-api-access-w7cjn" (OuterVolumeSpecName: "kube-api-access-w7cjn") pod "8e09aaea-d05d-44ea-99cc-d01284c8bf35" (UID: "8e09aaea-d05d-44ea-99cc-d01284c8bf35"). InnerVolumeSpecName "kube-api-access-w7cjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.464078 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7cjn\" (UniqueName: \"kubernetes.io/projected/8e09aaea-d05d-44ea-99cc-d01284c8bf35-kube-api-access-w7cjn\") on node \"crc\" DevicePath \"\"" Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.642001 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-gc68h"] Jan 26 15:01:59 crc kubenswrapper[4823]: I0126 15:01:59.645870 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-gc68h"] Jan 26 15:02:01 crc kubenswrapper[4823]: I0126 15:02:01.574621 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e09aaea-d05d-44ea-99cc-d01284c8bf35" path="/var/lib/kubelet/pods/8e09aaea-d05d-44ea-99cc-d01284c8bf35/volumes" Jan 26 15:02:08 crc kubenswrapper[4823]: I0126 15:02:08.196694 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:02:08 crc kubenswrapper[4823]: I0126 15:02:08.197185 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:02:08 crc kubenswrapper[4823]: I0126 15:02:08.235568 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:02:08 crc kubenswrapper[4823]: I0126 15:02:08.451246 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qvd8b" Jan 26 15:02:08 crc kubenswrapper[4823]: I0126 15:02:08.550785 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-spxlq" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.103927 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9"] Jan 26 15:02:09 crc kubenswrapper[4823]: E0126 15:02:09.104205 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e09aaea-d05d-44ea-99cc-d01284c8bf35" containerName="registry-server" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.104223 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e09aaea-d05d-44ea-99cc-d01284c8bf35" containerName="registry-server" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.104346 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e09aaea-d05d-44ea-99cc-d01284c8bf35" containerName="registry-server" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.105305 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.107892 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-79q6l" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.116156 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9"] Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.215263 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-util\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.215374 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lczch\" (UniqueName: 
\"kubernetes.io/projected/2298b17e-b08f-4710-8417-f795aa095251-kube-api-access-lczch\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.215611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-bundle\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.317081 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-util\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.317511 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lczch\" (UniqueName: \"kubernetes.io/projected/2298b17e-b08f-4710-8417-f795aa095251-kube-api-access-lczch\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.317555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-bundle\") pod 
\"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.317853 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-util\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.317953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-bundle\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.344420 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lczch\" (UniqueName: \"kubernetes.io/projected/2298b17e-b08f-4710-8417-f795aa095251-kube-api-access-lczch\") pod \"10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.424305 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:09 crc kubenswrapper[4823]: I0126 15:02:09.657602 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9"] Jan 26 15:02:09 crc kubenswrapper[4823]: W0126 15:02:09.666134 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2298b17e_b08f_4710_8417_f795aa095251.slice/crio-ebf701470c95031f2ac9776cbd5187e67cf79c6a403f4f32bdf98d33de4088d7 WatchSource:0}: Error finding container ebf701470c95031f2ac9776cbd5187e67cf79c6a403f4f32bdf98d33de4088d7: Status 404 returned error can't find the container with id ebf701470c95031f2ac9776cbd5187e67cf79c6a403f4f32bdf98d33de4088d7 Jan 26 15:02:10 crc kubenswrapper[4823]: I0126 15:02:10.433613 4823 generic.go:334] "Generic (PLEG): container finished" podID="2298b17e-b08f-4710-8417-f795aa095251" containerID="991546f9600eb495795f9459476faed1bca4f74ffe1992612d989b691b59d25f" exitCode=0 Jan 26 15:02:10 crc kubenswrapper[4823]: I0126 15:02:10.433705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" event={"ID":"2298b17e-b08f-4710-8417-f795aa095251","Type":"ContainerDied","Data":"991546f9600eb495795f9459476faed1bca4f74ffe1992612d989b691b59d25f"} Jan 26 15:02:10 crc kubenswrapper[4823]: I0126 15:02:10.434867 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" event={"ID":"2298b17e-b08f-4710-8417-f795aa095251","Type":"ContainerStarted","Data":"ebf701470c95031f2ac9776cbd5187e67cf79c6a403f4f32bdf98d33de4088d7"} Jan 26 15:02:11 crc kubenswrapper[4823]: I0126 15:02:11.448901 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="2298b17e-b08f-4710-8417-f795aa095251" containerID="008a9a1c0ab98399d77dbf56cd36e46cc3832a6d5194166de82cc1a5d4873eeb" exitCode=0 Jan 26 15:02:11 crc kubenswrapper[4823]: I0126 15:02:11.449037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" event={"ID":"2298b17e-b08f-4710-8417-f795aa095251","Type":"ContainerDied","Data":"008a9a1c0ab98399d77dbf56cd36e46cc3832a6d5194166de82cc1a5d4873eeb"} Jan 26 15:02:12 crc kubenswrapper[4823]: I0126 15:02:12.459864 4823 generic.go:334] "Generic (PLEG): container finished" podID="2298b17e-b08f-4710-8417-f795aa095251" containerID="7c8df046a342921fcd73802fb1a0334f305d1fd7cf3b97d09b2711e4a2f34ead" exitCode=0 Jan 26 15:02:12 crc kubenswrapper[4823]: I0126 15:02:12.459957 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" event={"ID":"2298b17e-b08f-4710-8417-f795aa095251","Type":"ContainerDied","Data":"7c8df046a342921fcd73802fb1a0334f305d1fd7cf3b97d09b2711e4a2f34ead"} Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.787678 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.887474 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-bundle\") pod \"2298b17e-b08f-4710-8417-f795aa095251\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.887641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lczch\" (UniqueName: \"kubernetes.io/projected/2298b17e-b08f-4710-8417-f795aa095251-kube-api-access-lczch\") pod \"2298b17e-b08f-4710-8417-f795aa095251\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.887708 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-util\") pod \"2298b17e-b08f-4710-8417-f795aa095251\" (UID: \"2298b17e-b08f-4710-8417-f795aa095251\") " Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.888576 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-bundle" (OuterVolumeSpecName: "bundle") pod "2298b17e-b08f-4710-8417-f795aa095251" (UID: "2298b17e-b08f-4710-8417-f795aa095251"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.904946 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-util" (OuterVolumeSpecName: "util") pod "2298b17e-b08f-4710-8417-f795aa095251" (UID: "2298b17e-b08f-4710-8417-f795aa095251"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.906569 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2298b17e-b08f-4710-8417-f795aa095251-kube-api-access-lczch" (OuterVolumeSpecName: "kube-api-access-lczch") pod "2298b17e-b08f-4710-8417-f795aa095251" (UID: "2298b17e-b08f-4710-8417-f795aa095251"). InnerVolumeSpecName "kube-api-access-lczch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.989519 4823 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.989577 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lczch\" (UniqueName: \"kubernetes.io/projected/2298b17e-b08f-4710-8417-f795aa095251-kube-api-access-lczch\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:13 crc kubenswrapper[4823]: I0126 15:02:13.989597 4823 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2298b17e-b08f-4710-8417-f795aa095251-util\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:14 crc kubenswrapper[4823]: I0126 15:02:14.474397 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" event={"ID":"2298b17e-b08f-4710-8417-f795aa095251","Type":"ContainerDied","Data":"ebf701470c95031f2ac9776cbd5187e67cf79c6a403f4f32bdf98d33de4088d7"} Jan 26 15:02:14 crc kubenswrapper[4823]: I0126 15:02:14.474447 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ebf701470c95031f2ac9776cbd5187e67cf79c6a403f4f32bdf98d33de4088d7" Jan 26 15:02:14 crc kubenswrapper[4823]: I0126 15:02:14.474478 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.115109 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl"] Jan 26 15:02:22 crc kubenswrapper[4823]: E0126 15:02:22.116104 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="pull" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.116117 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="pull" Jan 26 15:02:22 crc kubenswrapper[4823]: E0126 15:02:22.116134 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="util" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.116141 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="util" Jan 26 15:02:22 crc kubenswrapper[4823]: E0126 15:02:22.116150 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="extract" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.116158 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="extract" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.116298 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2298b17e-b08f-4710-8417-f795aa095251" containerName="extract" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.116895 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.124873 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-b9qmq" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.156130 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl"] Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.211210 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmv4t\" (UniqueName: \"kubernetes.io/projected/e13df68d-7c37-42a7-b54f-0d6d248012ad-kube-api-access-wmv4t\") pod \"openstack-operator-controller-init-c4b5d4cc7-g5bhl\" (UID: \"e13df68d-7c37-42a7-b54f-0d6d248012ad\") " pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.312430 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmv4t\" (UniqueName: \"kubernetes.io/projected/e13df68d-7c37-42a7-b54f-0d6d248012ad-kube-api-access-wmv4t\") pod \"openstack-operator-controller-init-c4b5d4cc7-g5bhl\" (UID: \"e13df68d-7c37-42a7-b54f-0d6d248012ad\") " pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.332820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmv4t\" (UniqueName: \"kubernetes.io/projected/e13df68d-7c37-42a7-b54f-0d6d248012ad-kube-api-access-wmv4t\") pod \"openstack-operator-controller-init-c4b5d4cc7-g5bhl\" (UID: \"e13df68d-7c37-42a7-b54f-0d6d248012ad\") " pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.438669 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:22 crc kubenswrapper[4823]: I0126 15:02:22.690836 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl"] Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.539427 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" event={"ID":"e13df68d-7c37-42a7-b54f-0d6d248012ad","Type":"ContainerStarted","Data":"c6716285bfe19933f2cf8968bbb5899abb5e961471f47003a3f51bade35cc49c"} Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.547626 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6r62f"] Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.549745 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.558798 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6r62f"] Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.632472 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-utilities\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.632527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-catalog-content\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " 
pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.632703 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4hv6\" (UniqueName: \"kubernetes.io/projected/2fb8d46c-6101-406d-af62-3d1dc500dd93-kube-api-access-z4hv6\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.734334 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4hv6\" (UniqueName: \"kubernetes.io/projected/2fb8d46c-6101-406d-af62-3d1dc500dd93-kube-api-access-z4hv6\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.734485 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-utilities\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.734524 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-catalog-content\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.735241 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-catalog-content\") pod \"community-operators-6r62f\" (UID: 
\"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.735265 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-utilities\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.755713 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4hv6\" (UniqueName: \"kubernetes.io/projected/2fb8d46c-6101-406d-af62-3d1dc500dd93-kube-api-access-z4hv6\") pod \"community-operators-6r62f\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:23 crc kubenswrapper[4823]: I0126 15:02:23.871842 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:24 crc kubenswrapper[4823]: I0126 15:02:24.159845 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6r62f"] Jan 26 15:02:24 crc kubenswrapper[4823]: I0126 15:02:24.570307 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerID="22721ee0511330b1b688b83472caf195f0dabedd770c719bbddd5d460d98a2c1" exitCode=0 Jan 26 15:02:24 crc kubenswrapper[4823]: I0126 15:02:24.570426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerDied","Data":"22721ee0511330b1b688b83472caf195f0dabedd770c719bbddd5d460d98a2c1"} Jan 26 15:02:24 crc kubenswrapper[4823]: I0126 15:02:24.570475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" 
event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerStarted","Data":"5c5ca975727199c5faf0af8e4b7e1522641ccdfd12a514bbd96cc885c3c3408d"} Jan 26 15:02:25 crc kubenswrapper[4823]: I0126 15:02:25.584907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerStarted","Data":"b2dc24828b45c1ab1d4b6971ccad6c294b2175cc89ee7205be3b67d82ce5086c"} Jan 26 15:02:26 crc kubenswrapper[4823]: I0126 15:02:26.598876 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerID="b2dc24828b45c1ab1d4b6971ccad6c294b2175cc89ee7205be3b67d82ce5086c" exitCode=0 Jan 26 15:02:26 crc kubenswrapper[4823]: I0126 15:02:26.599413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerDied","Data":"b2dc24828b45c1ab1d4b6971ccad6c294b2175cc89ee7205be3b67d82ce5086c"} Jan 26 15:02:29 crc kubenswrapper[4823]: I0126 15:02:29.629185 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" event={"ID":"e13df68d-7c37-42a7-b54f-0d6d248012ad","Type":"ContainerStarted","Data":"210881f150b4379c0e19f0f2e97513c568f00c7c74ed1c97c107a671829cdcfc"} Jan 26 15:02:29 crc kubenswrapper[4823]: I0126 15:02:29.631881 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:29 crc kubenswrapper[4823]: I0126 15:02:29.635908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerStarted","Data":"e4abdfed15b5af8458043ff7a650e57d9574b0ed957c754c80a5fadc75851ea3"} Jan 26 15:02:29 crc kubenswrapper[4823]: I0126 15:02:29.670403 
4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" podStartSLOduration=1.323052016 podStartE2EDuration="7.670383364s" podCreationTimestamp="2026-01-26 15:02:22 +0000 UTC" firstStartedPulling="2026-01-26 15:02:22.707735593 +0000 UTC m=+939.393198698" lastFinishedPulling="2026-01-26 15:02:29.055066941 +0000 UTC m=+945.740530046" observedRunningTime="2026-01-26 15:02:29.664506354 +0000 UTC m=+946.349969469" watchObservedRunningTime="2026-01-26 15:02:29.670383364 +0000 UTC m=+946.355846469" Jan 26 15:02:29 crc kubenswrapper[4823]: I0126 15:02:29.693576 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6r62f" podStartSLOduration=2.240426211 podStartE2EDuration="6.693538405s" podCreationTimestamp="2026-01-26 15:02:23 +0000 UTC" firstStartedPulling="2026-01-26 15:02:24.573194734 +0000 UTC m=+941.258657829" lastFinishedPulling="2026-01-26 15:02:29.026306908 +0000 UTC m=+945.711770023" observedRunningTime="2026-01-26 15:02:29.68601814 +0000 UTC m=+946.371481245" watchObservedRunningTime="2026-01-26 15:02:29.693538405 +0000 UTC m=+946.379001510" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.137189 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zr87g"] Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.139215 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.202032 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr87g"] Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.271349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-utilities\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.271448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-catalog-content\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.271512 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf58l\" (UniqueName: \"kubernetes.io/projected/611c27c7-4cfa-4085-b83f-c75f17230ec3-kube-api-access-mf58l\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.372973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-catalog-content\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.373042 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mf58l\" (UniqueName: \"kubernetes.io/projected/611c27c7-4cfa-4085-b83f-c75f17230ec3-kube-api-access-mf58l\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.373110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-utilities\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.373526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-catalog-content\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.373555 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-utilities\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.398654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf58l\" (UniqueName: \"kubernetes.io/projected/611c27c7-4cfa-4085-b83f-c75f17230ec3-kube-api-access-mf58l\") pod \"certified-operators-zr87g\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.459175 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:31 crc kubenswrapper[4823]: I0126 15:02:31.987443 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zr87g"] Jan 26 15:02:32 crc kubenswrapper[4823]: I0126 15:02:32.655348 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerStarted","Data":"73e7e3588c8d25e6d9de20b2695c0f73563a694f41efdea014c799d84d6fb2d1"} Jan 26 15:02:33 crc kubenswrapper[4823]: I0126 15:02:33.664213 4823 generic.go:334] "Generic (PLEG): container finished" podID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerID="c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef" exitCode=0 Jan 26 15:02:33 crc kubenswrapper[4823]: I0126 15:02:33.664293 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerDied","Data":"c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef"} Jan 26 15:02:33 crc kubenswrapper[4823]: I0126 15:02:33.872494 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:33 crc kubenswrapper[4823]: I0126 15:02:33.872571 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:33 crc kubenswrapper[4823]: I0126 15:02:33.935865 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:34 crc kubenswrapper[4823]: I0126 15:02:34.674851 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" 
event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerStarted","Data":"bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50"} Jan 26 15:02:34 crc kubenswrapper[4823]: I0126 15:02:34.721330 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:35 crc kubenswrapper[4823]: I0126 15:02:35.687150 4823 generic.go:334] "Generic (PLEG): container finished" podID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerID="bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50" exitCode=0 Jan 26 15:02:35 crc kubenswrapper[4823]: I0126 15:02:35.687240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerDied","Data":"bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50"} Jan 26 15:02:36 crc kubenswrapper[4823]: I0126 15:02:36.702374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerStarted","Data":"2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278"} Jan 26 15:02:36 crc kubenswrapper[4823]: I0126 15:02:36.724676 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zr87g" podStartSLOduration=3.088006622 podStartE2EDuration="5.724655658s" podCreationTimestamp="2026-01-26 15:02:31 +0000 UTC" firstStartedPulling="2026-01-26 15:02:33.667777818 +0000 UTC m=+950.353240923" lastFinishedPulling="2026-01-26 15:02:36.304426854 +0000 UTC m=+952.989889959" observedRunningTime="2026-01-26 15:02:36.722300945 +0000 UTC m=+953.407764050" watchObservedRunningTime="2026-01-26 15:02:36.724655658 +0000 UTC m=+953.410118763" Jan 26 15:02:37 crc kubenswrapper[4823]: I0126 15:02:37.124622 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6r62f"] Jan 26 15:02:37 crc kubenswrapper[4823]: I0126 15:02:37.124980 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6r62f" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="registry-server" containerID="cri-o://e4abdfed15b5af8458043ff7a650e57d9574b0ed957c754c80a5fadc75851ea3" gracePeriod=2 Jan 26 15:02:37 crc kubenswrapper[4823]: I0126 15:02:37.723071 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerID="e4abdfed15b5af8458043ff7a650e57d9574b0ed957c754c80a5fadc75851ea3" exitCode=0 Jan 26 15:02:37 crc kubenswrapper[4823]: I0126 15:02:37.723171 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerDied","Data":"e4abdfed15b5af8458043ff7a650e57d9574b0ed957c754c80a5fadc75851ea3"} Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.089399 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.191614 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4hv6\" (UniqueName: \"kubernetes.io/projected/2fb8d46c-6101-406d-af62-3d1dc500dd93-kube-api-access-z4hv6\") pod \"2fb8d46c-6101-406d-af62-3d1dc500dd93\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.191787 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-catalog-content\") pod \"2fb8d46c-6101-406d-af62-3d1dc500dd93\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.191883 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-utilities\") pod \"2fb8d46c-6101-406d-af62-3d1dc500dd93\" (UID: \"2fb8d46c-6101-406d-af62-3d1dc500dd93\") " Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.192966 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-utilities" (OuterVolumeSpecName: "utilities") pod "2fb8d46c-6101-406d-af62-3d1dc500dd93" (UID: "2fb8d46c-6101-406d-af62-3d1dc500dd93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.209912 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fb8d46c-6101-406d-af62-3d1dc500dd93-kube-api-access-z4hv6" (OuterVolumeSpecName: "kube-api-access-z4hv6") pod "2fb8d46c-6101-406d-af62-3d1dc500dd93" (UID: "2fb8d46c-6101-406d-af62-3d1dc500dd93"). InnerVolumeSpecName "kube-api-access-z4hv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.242979 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fb8d46c-6101-406d-af62-3d1dc500dd93" (UID: "2fb8d46c-6101-406d-af62-3d1dc500dd93"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.293414 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.293458 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fb8d46c-6101-406d-af62-3d1dc500dd93-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.293474 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4hv6\" (UniqueName: \"kubernetes.io/projected/2fb8d46c-6101-406d-af62-3d1dc500dd93-kube-api-access-z4hv6\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.732590 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6r62f" event={"ID":"2fb8d46c-6101-406d-af62-3d1dc500dd93","Type":"ContainerDied","Data":"5c5ca975727199c5faf0af8e4b7e1522641ccdfd12a514bbd96cc885c3c3408d"} Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.732675 4823 scope.go:117] "RemoveContainer" containerID="e4abdfed15b5af8458043ff7a650e57d9574b0ed957c754c80a5fadc75851ea3" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.732678 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6r62f" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.765004 4823 scope.go:117] "RemoveContainer" containerID="b2dc24828b45c1ab1d4b6971ccad6c294b2175cc89ee7205be3b67d82ce5086c" Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.768202 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6r62f"] Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.772766 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6r62f"] Jan 26 15:02:38 crc kubenswrapper[4823]: I0126 15:02:38.785511 4823 scope.go:117] "RemoveContainer" containerID="22721ee0511330b1b688b83472caf195f0dabedd770c719bbddd5d460d98a2c1" Jan 26 15:02:39 crc kubenswrapper[4823]: I0126 15:02:39.573585 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" path="/var/lib/kubelet/pods/2fb8d46c-6101-406d-af62-3d1dc500dd93/volumes" Jan 26 15:02:41 crc kubenswrapper[4823]: I0126 15:02:41.459899 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:41 crc kubenswrapper[4823]: I0126 15:02:41.460599 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:41 crc kubenswrapper[4823]: I0126 15:02:41.532651 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:41 crc kubenswrapper[4823]: I0126 15:02:41.796893 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:42 crc kubenswrapper[4823]: I0126 15:02:42.123715 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr87g"] Jan 26 15:02:42 crc 
kubenswrapper[4823]: I0126 15:02:42.443200 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-c4b5d4cc7-g5bhl" Jan 26 15:02:43 crc kubenswrapper[4823]: I0126 15:02:43.768618 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zr87g" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="registry-server" containerID="cri-o://2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278" gracePeriod=2 Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.742072 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.790563 4823 generic.go:334] "Generic (PLEG): container finished" podID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerID="2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278" exitCode=0 Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.790649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerDied","Data":"2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278"} Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.790690 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zr87g" event={"ID":"611c27c7-4cfa-4085-b83f-c75f17230ec3","Type":"ContainerDied","Data":"73e7e3588c8d25e6d9de20b2695c0f73563a694f41efdea014c799d84d6fb2d1"} Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.790717 4823 scope.go:117] "RemoveContainer" containerID="2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.790939 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zr87g" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.815107 4823 scope.go:117] "RemoveContainer" containerID="bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.840443 4823 scope.go:117] "RemoveContainer" containerID="c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.869023 4823 scope.go:117] "RemoveContainer" containerID="2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278" Jan 26 15:02:44 crc kubenswrapper[4823]: E0126 15:02:44.870076 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278\": container with ID starting with 2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278 not found: ID does not exist" containerID="2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.870146 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278"} err="failed to get container status \"2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278\": rpc error: code = NotFound desc = could not find container \"2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278\": container with ID starting with 2545f2d9fb9c4c19e250feb32ab29bf3d3ada49e63409869680a41ba6eb96278 not found: ID does not exist" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.870187 4823 scope.go:117] "RemoveContainer" containerID="bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50" Jan 26 15:02:44 crc kubenswrapper[4823]: E0126 15:02:44.871198 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50\": container with ID starting with bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50 not found: ID does not exist" containerID="bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.871327 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50"} err="failed to get container status \"bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50\": rpc error: code = NotFound desc = could not find container \"bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50\": container with ID starting with bc822b868c5b225ff02a71dbe3742e311b7f440829b2c3e562a94fb5e15dba50 not found: ID does not exist" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.871462 4823 scope.go:117] "RemoveContainer" containerID="c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef" Jan 26 15:02:44 crc kubenswrapper[4823]: E0126 15:02:44.872229 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef\": container with ID starting with c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef not found: ID does not exist" containerID="c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.872275 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef"} err="failed to get container status \"c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef\": rpc error: code = NotFound desc = could not find container 
\"c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef\": container with ID starting with c356676fcc751461d8045d905387079038f07f5f7cfa0331e52744ff29fa55ef not found: ID does not exist" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.904313 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-catalog-content\") pod \"611c27c7-4cfa-4085-b83f-c75f17230ec3\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.904528 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-utilities\") pod \"611c27c7-4cfa-4085-b83f-c75f17230ec3\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.904652 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mf58l\" (UniqueName: \"kubernetes.io/projected/611c27c7-4cfa-4085-b83f-c75f17230ec3-kube-api-access-mf58l\") pod \"611c27c7-4cfa-4085-b83f-c75f17230ec3\" (UID: \"611c27c7-4cfa-4085-b83f-c75f17230ec3\") " Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.905574 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-utilities" (OuterVolumeSpecName: "utilities") pod "611c27c7-4cfa-4085-b83f-c75f17230ec3" (UID: "611c27c7-4cfa-4085-b83f-c75f17230ec3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:02:44 crc kubenswrapper[4823]: I0126 15:02:44.913513 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611c27c7-4cfa-4085-b83f-c75f17230ec3-kube-api-access-mf58l" (OuterVolumeSpecName: "kube-api-access-mf58l") pod "611c27c7-4cfa-4085-b83f-c75f17230ec3" (UID: "611c27c7-4cfa-4085-b83f-c75f17230ec3"). InnerVolumeSpecName "kube-api-access-mf58l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.006589 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.006628 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mf58l\" (UniqueName: \"kubernetes.io/projected/611c27c7-4cfa-4085-b83f-c75f17230ec3-kube-api-access-mf58l\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.288853 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "611c27c7-4cfa-4085-b83f-c75f17230ec3" (UID: "611c27c7-4cfa-4085-b83f-c75f17230ec3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.311169 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611c27c7-4cfa-4085-b83f-c75f17230ec3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.428927 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zr87g"] Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.435347 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zr87g"] Jan 26 15:02:45 crc kubenswrapper[4823]: I0126 15:02:45.570729 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" path="/var/lib/kubelet/pods/611c27c7-4cfa-4085-b83f-c75f17230ec3/volumes" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.604934 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl"] Jan 26 15:03:02 crc kubenswrapper[4823]: E0126 15:03:02.605973 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="extract-content" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.605990 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="extract-content" Jan 26 15:03:02 crc kubenswrapper[4823]: E0126 15:03:02.606003 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="registry-server" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606009 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="registry-server" Jan 26 15:03:02 crc kubenswrapper[4823]: E0126 15:03:02.606018 4823 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="registry-server" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606025 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="registry-server" Jan 26 15:03:02 crc kubenswrapper[4823]: E0126 15:03:02.606039 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="extract-content" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606045 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="extract-content" Jan 26 15:03:02 crc kubenswrapper[4823]: E0126 15:03:02.606054 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="extract-utilities" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606062 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="extract-utilities" Jan 26 15:03:02 crc kubenswrapper[4823]: E0126 15:03:02.606072 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="extract-utilities" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606078 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="extract-utilities" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606204 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fb8d46c-6101-406d-af62-3d1dc500dd93" containerName="registry-server" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606216 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="611c27c7-4cfa-4085-b83f-c75f17230ec3" containerName="registry-server" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.606759 4823 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.608873 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-br9st" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.609919 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.611004 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.612909 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-brfk2" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.632154 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.633344 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.636185 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.644381 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-ppj9q" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.645299 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.656528 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.674701 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.675757 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.680255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78vqm\" (UniqueName: \"kubernetes.io/projected/394d042b-9673-4187-8e4a-b479dc07be27-kube-api-access-78vqm\") pod \"barbican-operator-controller-manager-7f86f8796f-szbhl\" (UID: \"394d042b-9673-4187-8e4a-b479dc07be27\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.680387 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c4tp\" (UniqueName: \"kubernetes.io/projected/d30f23ef-3901-419c-afd2-bce286e7bb01-kube-api-access-9c4tp\") pod \"designate-operator-controller-manager-b45d7bf98-f4qg7\" (UID: \"d30f23ef-3901-419c-afd2-bce286e7bb01\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.680435 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mw78\" (UniqueName: \"kubernetes.io/projected/bf60542f-f900-4d89-98f4-aeaa7878edda-kube-api-access-9mw78\") pod \"cinder-operator-controller-manager-7478f7dbf9-rgql2\" (UID: \"bf60542f-f900-4d89-98f4-aeaa7878edda\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.681656 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lbwmt" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.682382 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 
15:03:02.683500 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.685058 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-qzk54" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.698534 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.735505 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.783223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c4tp\" (UniqueName: \"kubernetes.io/projected/d30f23ef-3901-419c-afd2-bce286e7bb01-kube-api-access-9c4tp\") pod \"designate-operator-controller-manager-b45d7bf98-f4qg7\" (UID: \"d30f23ef-3901-419c-afd2-bce286e7bb01\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.783291 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mw78\" (UniqueName: \"kubernetes.io/projected/bf60542f-f900-4d89-98f4-aeaa7878edda-kube-api-access-9mw78\") pod \"cinder-operator-controller-manager-7478f7dbf9-rgql2\" (UID: \"bf60542f-f900-4d89-98f4-aeaa7878edda\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.783349 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9d6\" (UniqueName: \"kubernetes.io/projected/038238a3-7348-4fd5-ae41-3473ff6cd14d-kube-api-access-4m9d6\") pod 
\"heat-operator-controller-manager-594c8c9d5d-l9rwn\" (UID: \"038238a3-7348-4fd5-ae41-3473ff6cd14d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.783406 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrpn\" (UniqueName: \"kubernetes.io/projected/c133cb3a-ff1b-4819-90a2-91d0cecb0ed9-kube-api-access-kxrpn\") pod \"glance-operator-controller-manager-78fdd796fd-s7b2n\" (UID: \"c133cb3a-ff1b-4819-90a2-91d0cecb0ed9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.783437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78vqm\" (UniqueName: \"kubernetes.io/projected/394d042b-9673-4187-8e4a-b479dc07be27-kube-api-access-78vqm\") pod \"barbican-operator-controller-manager-7f86f8796f-szbhl\" (UID: \"394d042b-9673-4187-8e4a-b479dc07be27\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.790492 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.800631 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.804222 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7xw87" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.813456 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c4tp\" (UniqueName: \"kubernetes.io/projected/d30f23ef-3901-419c-afd2-bce286e7bb01-kube-api-access-9c4tp\") pod \"designate-operator-controller-manager-b45d7bf98-f4qg7\" (UID: \"d30f23ef-3901-419c-afd2-bce286e7bb01\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.816136 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78vqm\" (UniqueName: \"kubernetes.io/projected/394d042b-9673-4187-8e4a-b479dc07be27-kube-api-access-78vqm\") pod \"barbican-operator-controller-manager-7f86f8796f-szbhl\" (UID: \"394d042b-9673-4187-8e4a-b479dc07be27\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.823174 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mw78\" (UniqueName: \"kubernetes.io/projected/bf60542f-f900-4d89-98f4-aeaa7878edda-kube-api-access-9mw78\") pod \"cinder-operator-controller-manager-7478f7dbf9-rgql2\" (UID: \"bf60542f-f900-4d89-98f4-aeaa7878edda\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.827766 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.829139 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.835168 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.835475 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.835914 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dc2rt" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.842257 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.847580 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.848872 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.854548 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-2fbrb" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.875629 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.886834 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrpn\" (UniqueName: \"kubernetes.io/projected/c133cb3a-ff1b-4819-90a2-91d0cecb0ed9-kube-api-access-kxrpn\") pod \"glance-operator-controller-manager-78fdd796fd-s7b2n\" (UID: \"c133cb3a-ff1b-4819-90a2-91d0cecb0ed9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.886959 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9d6\" (UniqueName: \"kubernetes.io/projected/038238a3-7348-4fd5-ae41-3473ff6cd14d-kube-api-access-4m9d6\") pod \"heat-operator-controller-manager-594c8c9d5d-l9rwn\" (UID: \"038238a3-7348-4fd5-ae41-3473ff6cd14d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.911979 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.913040 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.917796 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-f45nk" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.931150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrpn\" (UniqueName: \"kubernetes.io/projected/c133cb3a-ff1b-4819-90a2-91d0cecb0ed9-kube-api-access-kxrpn\") pod \"glance-operator-controller-manager-78fdd796fd-s7b2n\" (UID: \"c133cb3a-ff1b-4819-90a2-91d0cecb0ed9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.936442 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.954456 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.955112 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.961423 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9d6\" (UniqueName: \"kubernetes.io/projected/038238a3-7348-4fd5-ae41-3473ff6cd14d-kube-api-access-4m9d6\") pod \"heat-operator-controller-manager-594c8c9d5d-l9rwn\" (UID: \"038238a3-7348-4fd5-ae41-3473ff6cd14d\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.963562 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.965715 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59"] Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.975751 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.986262 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-cqjjp" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.990247 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.990334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l65zg\" (UniqueName: \"kubernetes.io/projected/2bc0a30b-01c7-4626-928b-fedcc58e373e-kube-api-access-l65zg\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.990377 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sn2n\" (UniqueName: \"kubernetes.io/projected/7cd351ff-1cb2-417e-9d45-5f16d7dc0a43-kube-api-access-4sn2n\") pod \"horizon-operator-controller-manager-77d5c5b54f-p2vfc\" (UID: 
\"7cd351ff-1cb2-417e-9d45-5f16d7dc0a43\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:02 crc kubenswrapper[4823]: I0126 15:03:02.990414 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjnd7\" (UniqueName: \"kubernetes.io/projected/5b983d23-dbff-4482-b9fc-6fec60b1ab7f-kube-api-access-xjnd7\") pod \"ironic-operator-controller-manager-598f7747c9-lrrrj\" (UID: \"5b983d23-dbff-4482-b9fc-6fec60b1ab7f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.002457 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.003526 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.011493 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nq5hv" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.020515 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.029245 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.033164 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.062594 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.067035 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.068238 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.080167 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tg9kl" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092182 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sn2n\" (UniqueName: \"kubernetes.io/projected/7cd351ff-1cb2-417e-9d45-5f16d7dc0a43-kube-api-access-4sn2n\") pod \"horizon-operator-controller-manager-77d5c5b54f-p2vfc\" (UID: \"7cd351ff-1cb2-417e-9d45-5f16d7dc0a43\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjnd7\" (UniqueName: \"kubernetes.io/projected/5b983d23-dbff-4482-b9fc-6fec60b1ab7f-kube-api-access-xjnd7\") pod \"ironic-operator-controller-manager-598f7747c9-lrrrj\" (UID: \"5b983d23-dbff-4482-b9fc-6fec60b1ab7f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092325 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqnhf\" (UniqueName: \"kubernetes.io/projected/38cf7a4f-36ed-4af7-a896-27f163d35986-kube-api-access-fqnhf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4\" (UID: 
\"38cf7a4f-36ed-4af7-a896-27f163d35986\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092358 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sqjq\" (UniqueName: \"kubernetes.io/projected/0d61828c-0d9d-42d5-8fbe-dea8080b620e-kube-api-access-9sqjq\") pod \"manila-operator-controller-manager-78c6999f6f-xqx59\" (UID: \"0d61828c-0d9d-42d5-8fbe-dea8080b620e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092414 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqng\" (UniqueName: \"kubernetes.io/projected/f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50-kube-api-access-6rqng\") pod \"keystone-operator-controller-manager-b8b6d4659-tlwrn\" (UID: \"f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092464 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.092533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l65zg\" (UniqueName: \"kubernetes.io/projected/2bc0a30b-01c7-4626-928b-fedcc58e373e-kube-api-access-l65zg\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:03 crc kubenswrapper[4823]: 
E0126 15:03:03.095009 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.097194 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert podName:2bc0a30b-01c7-4626-928b-fedcc58e373e nodeName:}" failed. No retries permitted until 2026-01-26 15:03:03.595076218 +0000 UTC m=+980.280539323 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert") pod "infra-operator-controller-manager-694cf4f878-zcxds" (UID: "2bc0a30b-01c7-4626-928b-fedcc58e373e") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.106410 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.124518 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sn2n\" (UniqueName: \"kubernetes.io/projected/7cd351ff-1cb2-417e-9d45-5f16d7dc0a43-kube-api-access-4sn2n\") pod \"horizon-operator-controller-manager-77d5c5b54f-p2vfc\" (UID: \"7cd351ff-1cb2-417e-9d45-5f16d7dc0a43\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.125905 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l65zg\" (UniqueName: \"kubernetes.io/projected/2bc0a30b-01c7-4626-928b-fedcc58e373e-kube-api-access-l65zg\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.132086 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjnd7\" (UniqueName: \"kubernetes.io/projected/5b983d23-dbff-4482-b9fc-6fec60b1ab7f-kube-api-access-xjnd7\") pod \"ironic-operator-controller-manager-598f7747c9-lrrrj\" (UID: \"5b983d23-dbff-4482-b9fc-6fec60b1ab7f\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.134690 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.135691 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.139637 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-gdp8k" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.147480 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.155536 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.158953 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-ctbb5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.166297 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.176334 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.195516 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2d7z\" (UniqueName: \"kubernetes.io/projected/16294fad-09f5-4781-83d7-82b25d1bc644-kube-api-access-g2d7z\") pod \"neutron-operator-controller-manager-78d58447c5-mrlhr\" (UID: \"16294fad-09f5-4781-83d7-82b25d1bc644\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.195951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqnhf\" (UniqueName: \"kubernetes.io/projected/38cf7a4f-36ed-4af7-a896-27f163d35986-kube-api-access-fqnhf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4\" (UID: \"38cf7a4f-36ed-4af7-a896-27f163d35986\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.196003 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sqjq\" (UniqueName: \"kubernetes.io/projected/0d61828c-0d9d-42d5-8fbe-dea8080b620e-kube-api-access-9sqjq\") pod \"manila-operator-controller-manager-78c6999f6f-xqx59\" (UID: 
\"0d61828c-0d9d-42d5-8fbe-dea8080b620e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.196058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqng\" (UniqueName: \"kubernetes.io/projected/f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50-kube-api-access-6rqng\") pod \"keystone-operator-controller-manager-b8b6d4659-tlwrn\" (UID: \"f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.203168 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.228798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqng\" (UniqueName: \"kubernetes.io/projected/f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50-kube-api-access-6rqng\") pod \"keystone-operator-controller-manager-b8b6d4659-tlwrn\" (UID: \"f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.235148 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqnhf\" (UniqueName: \"kubernetes.io/projected/38cf7a4f-36ed-4af7-a896-27f163d35986-kube-api-access-fqnhf\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4\" (UID: \"38cf7a4f-36ed-4af7-a896-27f163d35986\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.249957 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sqjq\" (UniqueName: \"kubernetes.io/projected/0d61828c-0d9d-42d5-8fbe-dea8080b620e-kube-api-access-9sqjq\") pod 
\"manila-operator-controller-manager-78c6999f6f-xqx59\" (UID: \"0d61828c-0d9d-42d5-8fbe-dea8080b620e\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.254403 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.255476 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.259065 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-scgns" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.260433 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.261551 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.266251 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.269548 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-p6hnb" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.273989 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.278598 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.288307 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lvn5b" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.298115 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2d7z\" (UniqueName: \"kubernetes.io/projected/16294fad-09f5-4781-83d7-82b25d1bc644-kube-api-access-g2d7z\") pod \"neutron-operator-controller-manager-78d58447c5-mrlhr\" (UID: \"16294fad-09f5-4781-83d7-82b25d1bc644\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.298227 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlr7\" (UniqueName: \"kubernetes.io/projected/ee032756-312e-4349-842b-f9bc642f7c08-kube-api-access-2nlr7\") pod \"nova-operator-controller-manager-7bdb645866-9k7d5\" (UID: \"ee032756-312e-4349-842b-f9bc642f7c08\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.298254 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztskm\" (UniqueName: \"kubernetes.io/projected/586e8217-d8bb-4d02-bfae-39db746fb0ca-kube-api-access-ztskm\") pod \"octavia-operator-controller-manager-5f4cd88d46-2dzj6\" (UID: \"586e8217-d8bb-4d02-bfae-39db746fb0ca\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.301146 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.319833 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.331736 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.362136 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2d7z\" (UniqueName: \"kubernetes.io/projected/16294fad-09f5-4781-83d7-82b25d1bc644-kube-api-access-g2d7z\") pod \"neutron-operator-controller-manager-78d58447c5-mrlhr\" (UID: \"16294fad-09f5-4781-83d7-82b25d1bc644\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.385520 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.397105 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.398960 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.399980 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwxh4\" (UniqueName: \"kubernetes.io/projected/2cdca653-4a4b-4452-9a00-5667349cb42a-kube-api-access-cwxh4\") pod \"placement-operator-controller-manager-79d5ccc684-c56f9\" (UID: \"2cdca653-4a4b-4452-9a00-5667349cb42a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.400049 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlr7\" (UniqueName: \"kubernetes.io/projected/ee032756-312e-4349-842b-f9bc642f7c08-kube-api-access-2nlr7\") pod \"nova-operator-controller-manager-7bdb645866-9k7d5\" (UID: \"ee032756-312e-4349-842b-f9bc642f7c08\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.400096 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztskm\" (UniqueName: \"kubernetes.io/projected/586e8217-d8bb-4d02-bfae-39db746fb0ca-kube-api-access-ztskm\") pod \"octavia-operator-controller-manager-5f4cd88d46-2dzj6\" (UID: \"586e8217-d8bb-4d02-bfae-39db746fb0ca\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.400196 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7764\" (UniqueName: \"kubernetes.io/projected/df95f821-a1f5-488a-a730-9c3c2f39fd4c-kube-api-access-x7764\") pod \"ovn-operator-controller-manager-6f75f45d54-snjmz\" (UID: \"df95f821-a1f5-488a-a730-9c3c2f39fd4c\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:03 crc 
kubenswrapper[4823]: I0126 15:03:03.400234 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fcrf\" (UniqueName: \"kubernetes.io/projected/b30af672-528b-4f1d-8bbf-e96085248217-kube-api-access-8fcrf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.400266 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.406869 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.406936 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-mjqv8" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.425193 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.442807 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.443922 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.451352 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-clnws" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.454269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztskm\" (UniqueName: \"kubernetes.io/projected/586e8217-d8bb-4d02-bfae-39db746fb0ca-kube-api-access-ztskm\") pod \"octavia-operator-controller-manager-5f4cd88d46-2dzj6\" (UID: \"586e8217-d8bb-4d02-bfae-39db746fb0ca\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.460614 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlr7\" (UniqueName: \"kubernetes.io/projected/ee032756-312e-4349-842b-f9bc642f7c08-kube-api-access-2nlr7\") pod \"nova-operator-controller-manager-7bdb645866-9k7d5\" (UID: \"ee032756-312e-4349-842b-f9bc642f7c08\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.489523 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.503789 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwxh4\" (UniqueName: \"kubernetes.io/projected/2cdca653-4a4b-4452-9a00-5667349cb42a-kube-api-access-cwxh4\") pod \"placement-operator-controller-manager-79d5ccc684-c56f9\" (UID: \"2cdca653-4a4b-4452-9a00-5667349cb42a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.503925 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnjs5\" (UniqueName: \"kubernetes.io/projected/f6145a22-466d-42fa-995e-7e6a8c4ffcc2-kube-api-access-nnjs5\") pod \"swift-operator-controller-manager-547cbdb99f-5qs2p\" (UID: \"f6145a22-466d-42fa-995e-7e6a8c4ffcc2\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.503996 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7764\" (UniqueName: \"kubernetes.io/projected/df95f821-a1f5-488a-a730-9c3c2f39fd4c-kube-api-access-x7764\") pod \"ovn-operator-controller-manager-6f75f45d54-snjmz\" (UID: \"df95f821-a1f5-488a-a730-9c3c2f39fd4c\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.504023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fcrf\" (UniqueName: \"kubernetes.io/projected/b30af672-528b-4f1d-8bbf-e96085248217-kube-api-access-8fcrf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:03 crc 
kubenswrapper[4823]: I0126 15:03:03.504054 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.504247 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.504321 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert podName:b30af672-528b-4f1d-8bbf-e96085248217 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:04.004297954 +0000 UTC m=+980.689761059 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" (UID: "b30af672-528b-4f1d-8bbf-e96085248217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.512630 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.531161 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.532800 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7764\" (UniqueName: \"kubernetes.io/projected/df95f821-a1f5-488a-a730-9c3c2f39fd4c-kube-api-access-x7764\") pod \"ovn-operator-controller-manager-6f75f45d54-snjmz\" (UID: \"df95f821-a1f5-488a-a730-9c3c2f39fd4c\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.534001 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.536834 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwxh4\" (UniqueName: \"kubernetes.io/projected/2cdca653-4a4b-4452-9a00-5667349cb42a-kube-api-access-cwxh4\") pod \"placement-operator-controller-manager-79d5ccc684-c56f9\" (UID: \"2cdca653-4a4b-4452-9a00-5667349cb42a\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.538631 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fcrf\" (UniqueName: \"kubernetes.io/projected/b30af672-528b-4f1d-8bbf-e96085248217-kube-api-access-8fcrf\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.605036 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkvlz\" (UniqueName: \"kubernetes.io/projected/7534725a-0a1c-4ef0-b5ce-e6b758b4a174-kube-api-access-rkvlz\") pod 
\"telemetry-operator-controller-manager-85cd9769bb-h4ckq\" (UID: \"7534725a-0a1c-4ef0-b5ce-e6b758b4a174\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.605123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.605170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnjs5\" (UniqueName: \"kubernetes.io/projected/f6145a22-466d-42fa-995e-7e6a8c4ffcc2-kube-api-access-nnjs5\") pod \"swift-operator-controller-manager-547cbdb99f-5qs2p\" (UID: \"f6145a22-466d-42fa-995e-7e6a8c4ffcc2\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.605441 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.605608 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert podName:2bc0a30b-01c7-4626-928b-fedcc58e373e nodeName:}" failed. No retries permitted until 2026-01-26 15:03:04.605577562 +0000 UTC m=+981.291040727 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert") pod "infra-operator-controller-manager-694cf4f878-zcxds" (UID: "2bc0a30b-01c7-4626-928b-fedcc58e373e") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.608456 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.626658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnjs5\" (UniqueName: \"kubernetes.io/projected/f6145a22-466d-42fa-995e-7e6a8c4ffcc2-kube-api-access-nnjs5\") pod \"swift-operator-controller-manager-547cbdb99f-5qs2p\" (UID: \"f6145a22-466d-42fa-995e-7e6a8c4ffcc2\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.632213 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.632277 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.632303 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-948cd64bd-tpsth"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.633348 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-948cd64bd-tpsth"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.633536 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.634767 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-lltvv"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.636663 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kld5x" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.638459 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.645783 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-lltvv"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.651729 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sqff7" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.681154 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.685244 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.690914 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.691013 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-crk8q" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.690914 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.704577 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.709592 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzsgq\" (UniqueName: \"kubernetes.io/projected/d89101c6-6415-47d7-8e82-65d8a7b3a961-kube-api-access-wzsgq\") pod \"test-operator-controller-manager-948cd64bd-tpsth\" (UID: \"d89101c6-6415-47d7-8e82-65d8a7b3a961\") " pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.709680 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p68s4\" (UniqueName: \"kubernetes.io/projected/78a7e26b-4eac-4604-82ff-ce393cf816b6-kube-api-access-p68s4\") pod \"watcher-operator-controller-manager-564965969-lltvv\" (UID: \"78a7e26b-4eac-4604-82ff-ce393cf816b6\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.709743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkvlz\" (UniqueName: 
\"kubernetes.io/projected/7534725a-0a1c-4ef0-b5ce-e6b758b4a174-kube-api-access-rkvlz\") pod \"telemetry-operator-controller-manager-85cd9769bb-h4ckq\" (UID: \"7534725a-0a1c-4ef0-b5ce-e6b758b4a174\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.736346 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkvlz\" (UniqueName: \"kubernetes.io/projected/7534725a-0a1c-4ef0-b5ce-e6b758b4a174-kube-api-access-rkvlz\") pod \"telemetry-operator-controller-manager-85cd9769bb-h4ckq\" (UID: \"7534725a-0a1c-4ef0-b5ce-e6b758b4a174\") " pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.765924 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.772924 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x"] Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.773080 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.787520 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-dqshk" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.788220 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.818419 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.818577 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.818621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzsgq\" (UniqueName: \"kubernetes.io/projected/d89101c6-6415-47d7-8e82-65d8a7b3a961-kube-api-access-wzsgq\") pod \"test-operator-controller-manager-948cd64bd-tpsth\" (UID: \"d89101c6-6415-47d7-8e82-65d8a7b3a961\") " pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.818657 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p68s4\" (UniqueName: \"kubernetes.io/projected/78a7e26b-4eac-4604-82ff-ce393cf816b6-kube-api-access-p68s4\") pod \"watcher-operator-controller-manager-564965969-lltvv\" (UID: \"78a7e26b-4eac-4604-82ff-ce393cf816b6\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 
15:03:03.818700 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcs9w\" (UniqueName: \"kubernetes.io/projected/0e7ff918-aecf-4718-912b-d85f1dbd1799-kube-api-access-gcs9w\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.842914 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.854153 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzsgq\" (UniqueName: \"kubernetes.io/projected/d89101c6-6415-47d7-8e82-65d8a7b3a961-kube-api-access-wzsgq\") pod \"test-operator-controller-manager-948cd64bd-tpsth\" (UID: \"d89101c6-6415-47d7-8e82-65d8a7b3a961\") " pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.859823 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p68s4\" (UniqueName: \"kubernetes.io/projected/78a7e26b-4eac-4604-82ff-ce393cf816b6-kube-api-access-p68s4\") pod \"watcher-operator-controller-manager-564965969-lltvv\" (UID: \"78a7e26b-4eac-4604-82ff-ce393cf816b6\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.869801 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.919904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89c6w\" (UniqueName: \"kubernetes.io/projected/13bd131b-e367-44a0-a552-bf7f2446f6c2-kube-api-access-89c6w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2247x\" (UID: \"13bd131b-e367-44a0-a552-bf7f2446f6c2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.920009 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.920148 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.920229 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:04.420205052 +0000 UTC m=+981.105668157 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "webhook-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.920315 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcs9w\" (UniqueName: \"kubernetes.io/projected/0e7ff918-aecf-4718-912b-d85f1dbd1799-kube-api-access-gcs9w\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.920426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.920626 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: E0126 15:03:03.920757 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:04.420717286 +0000 UTC m=+981.106180381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "metrics-server-cert" not found Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.950837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcs9w\" (UniqueName: \"kubernetes.io/projected/0e7ff918-aecf-4718-912b-d85f1dbd1799-kube-api-access-gcs9w\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.968854 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:03 crc kubenswrapper[4823]: I0126 15:03:03.985098 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.025002 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89c6w\" (UniqueName: \"kubernetes.io/projected/13bd131b-e367-44a0-a552-bf7f2446f6c2-kube-api-access-89c6w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2247x\" (UID: \"13bd131b-e367-44a0-a552-bf7f2446f6c2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.025082 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.025281 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.025357 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert podName:b30af672-528b-4f1d-8bbf-e96085248217 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:05.025332796 +0000 UTC m=+981.710795901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" (UID: "b30af672-528b-4f1d-8bbf-e96085248217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.057200 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89c6w\" (UniqueName: \"kubernetes.io/projected/13bd131b-e367-44a0-a552-bf7f2446f6c2-kube-api-access-89c6w\") pod \"rabbitmq-cluster-operator-manager-668c99d594-2247x\" (UID: \"13bd131b-e367-44a0-a552-bf7f2446f6c2\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.178502 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.268575 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.277624 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn"] Jan 26 15:03:04 crc kubenswrapper[4823]: W0126 15:03:04.285311 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf60542f_f900_4d89_98f4_aeaa7878edda.slice/crio-7b151e0323cf8da5ffd412cb89fe878f7cbd2559d420f4efce524b87b68e12f7 WatchSource:0}: Error finding container 7b151e0323cf8da5ffd412cb89fe878f7cbd2559d420f4efce524b87b68e12f7: Status 404 returned error can't find the container with id 7b151e0323cf8da5ffd412cb89fe878f7cbd2559d420f4efce524b87b68e12f7 Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.286623 4823 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.318428 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.328598 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.445971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.446151 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.449225 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.449314 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:05.449278753 +0000 UTC m=+982.134741858 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "metrics-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.450212 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.450266 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:05.45024487 +0000 UTC m=+982.135707985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "webhook-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.507970 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.508066 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.652412 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.652643 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.652704 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert podName:2bc0a30b-01c7-4626-928b-fedcc58e373e nodeName:}" failed. No retries permitted until 2026-01-26 15:03:06.652687704 +0000 UTC m=+983.338150809 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert") pod "infra-operator-controller-manager-694cf4f878-zcxds" (UID: "2bc0a30b-01c7-4626-928b-fedcc58e373e") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.671219 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.696517 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.702354 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.706553 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc"] Jan 26 15:03:04 
crc kubenswrapper[4823]: I0126 15:03:04.710342 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.735446 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.742910 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p"] Jan 26 15:03:04 crc kubenswrapper[4823]: W0126 15:03:04.757382 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf95f821_a1f5_488a_a730_9c3c2f39fd4c.slice/crio-5c85ff83269a045d519b831820f9d827902c8375cfc50b6eb0424dbcd501e7ae WatchSource:0}: Error finding container 5c85ff83269a045d519b831820f9d827902c8375cfc50b6eb0424dbcd501e7ae: Status 404 returned error can't find the container with id 5c85ff83269a045d519b831820f9d827902c8375cfc50b6eb0424dbcd501e7ae Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.913444 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.926755 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-948cd64bd-tpsth"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.930489 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4"] Jan 26 15:03:04 crc kubenswrapper[4823]: W0126 15:03:04.930563 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7534725a_0a1c_4ef0_b5ce_e6b758b4a174.slice/crio-ff232872725ed7cc4ddac06da00e1c0b8511f43e5b975e94bc141ec63605b990 WatchSource:0}: Error finding container ff232872725ed7cc4ddac06da00e1c0b8511f43e5b975e94bc141ec63605b990: Status 404 returned error can't find the container with id ff232872725ed7cc4ddac06da00e1c0b8511f43e5b975e94bc141ec63605b990 Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.935794 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9"] Jan 26 15:03:04 crc kubenswrapper[4823]: W0126 15:03:04.943707 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38cf7a4f_36ed_4af7_a896_27f163d35986.slice/crio-2f0f5ea990a8936dfcaa22de889c0bf8965a21b7a6a7444c1dd04e5c576f5cbf WatchSource:0}: Error finding container 2f0f5ea990a8936dfcaa22de889c0bf8965a21b7a6a7444c1dd04e5c576f5cbf: Status 404 returned error can't find the container with id 2f0f5ea990a8936dfcaa22de889c0bf8965a21b7a6a7444c1dd04e5c576f5cbf Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.950244 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fqnhf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4_openstack-operators(38cf7a4f-36ed-4af7-a896-27f163d35986): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.952263 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" 
pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" podUID="38cf7a4f-36ed-4af7-a896-27f163d35986" Jan 26 15:03:04 crc kubenswrapper[4823]: W0126 15:03:04.958534 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cdca653_4a4b_4452_9a00_5667349cb42a.slice/crio-f1ca5db09d5e355f48cd201203d17abc5a43b52c6fd38d4ea7838905a1507af2 WatchSource:0}: Error finding container f1ca5db09d5e355f48cd201203d17abc5a43b52c6fd38d4ea7838905a1507af2: Status 404 returned error can't find the container with id f1ca5db09d5e355f48cd201203d17abc5a43b52c6fd38d4ea7838905a1507af2 Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.958747 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.66:5001/openstack-k8s-operators/test-operator:03cfa8a5d87dc9740d2bd0cb2c0de5575ca0b56d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzsgq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-948cd64bd-tpsth_openstack-operators(d89101c6-6415-47d7-8e82-65d8a7b3a961): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.960571 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" podUID="d89101c6-6415-47d7-8e82-65d8a7b3a961" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.962132 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cwxh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-c56f9_openstack-operators(2cdca653-4a4b-4452-9a00-5667349cb42a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:04 crc kubenswrapper[4823]: E0126 15:03:04.963605 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" podUID="2cdca653-4a4b-4452-9a00-5667349cb42a" Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.972384 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.987086 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5"] Jan 26 15:03:04 crc kubenswrapper[4823]: I0126 15:03:04.999378 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-lltvv"] Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.001500 4823 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nlr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-7bdb645866-9k7d5_openstack-operators(ee032756-312e-4349-842b-f9bc642f7c08): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.001627 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ztskm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-2dzj6_openstack-operators(586e8217-d8bb-4d02-bfae-39db746fb0ca): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.001729 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p68s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-lltvv_openstack-operators(78a7e26b-4eac-4604-82ff-ce393cf816b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.003007 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" podUID="78a7e26b-4eac-4604-82ff-ce393cf816b6" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.003083 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" podUID="ee032756-312e-4349-842b-f9bc642f7c08" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.003112 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" podUID="586e8217-d8bb-4d02-bfae-39db746fb0ca" Jan 26 15:03:05 crc 
kubenswrapper[4823]: I0126 15:03:05.007112 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x"] Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.011106 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" event={"ID":"586e8217-d8bb-4d02-bfae-39db746fb0ca","Type":"ContainerStarted","Data":"5a6e34532ba0c03fea9eee9e208f09478c8ac89c4cfec98a50c8aa1cd3d99b2d"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.013340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" event={"ID":"78a7e26b-4eac-4604-82ff-ce393cf816b6","Type":"ContainerStarted","Data":"64071587f1be46eb87afda83d6bac06171ccf3add15a6acf6f9d980d4ddcb68c"} Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.013824 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" podUID="586e8217-d8bb-4d02-bfae-39db746fb0ca" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.014673 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" podUID="78a7e26b-4eac-4604-82ff-ce393cf816b6" Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.024465 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" event={"ID":"df95f821-a1f5-488a-a730-9c3c2f39fd4c","Type":"ContainerStarted","Data":"5c85ff83269a045d519b831820f9d827902c8375cfc50b6eb0424dbcd501e7ae"} Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.025764 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-89c6w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-2247x_openstack-operators(13bd131b-e367-44a0-a552-bf7f2446f6c2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.029050 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" podUID="13bd131b-e367-44a0-a552-bf7f2446f6c2" Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.031293 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" event={"ID":"bf60542f-f900-4d89-98f4-aeaa7878edda","Type":"ContainerStarted","Data":"7b151e0323cf8da5ffd412cb89fe878f7cbd2559d420f4efce524b87b68e12f7"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.039041 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" 
event={"ID":"0d61828c-0d9d-42d5-8fbe-dea8080b620e","Type":"ContainerStarted","Data":"a85b04aaff10d17d2e7a3cad8231bdcc85022454b996d644f3180b444bacfb3c"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.049061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" event={"ID":"ee032756-312e-4349-842b-f9bc642f7c08","Type":"ContainerStarted","Data":"31cfb36e4edba249f15a472dbb6797473ef78edd7fa65c7e7a757ddf892c5ec2"} Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.051632 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" podUID="ee032756-312e-4349-842b-f9bc642f7c08" Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.053994 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" event={"ID":"d30f23ef-3901-419c-afd2-bce286e7bb01","Type":"ContainerStarted","Data":"18e5fb5dd40d46c431beda84bcccd44ac6b016cb7ff23dc2a9a69855bc4e744c"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.058407 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.058601 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not 
found Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.060168 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" event={"ID":"38cf7a4f-36ed-4af7-a896-27f163d35986","Type":"ContainerStarted","Data":"2f0f5ea990a8936dfcaa22de889c0bf8965a21b7a6a7444c1dd04e5c576f5cbf"} Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.060561 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert podName:b30af672-528b-4f1d-8bbf-e96085248217 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:07.06050401 +0000 UTC m=+983.745967335 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" (UID: "b30af672-528b-4f1d-8bbf-e96085248217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.062235 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" podUID="38cf7a4f-36ed-4af7-a896-27f163d35986" Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.062779 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" event={"ID":"16294fad-09f5-4781-83d7-82b25d1bc644","Type":"ContainerStarted","Data":"0a651b06e62e0d5a5cc5789db93cec9546c6e607dbaf4c99751c1b6834a28420"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.064740 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" event={"ID":"394d042b-9673-4187-8e4a-b479dc07be27","Type":"ContainerStarted","Data":"dbf23618f5b18c0db91e6a8bd5da31ad9f4697c697c653be2fc8faa1f32c3b18"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.086343 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" event={"ID":"f6145a22-466d-42fa-995e-7e6a8c4ffcc2","Type":"ContainerStarted","Data":"c85e8f60a7597a0922643895a095469bfc4148730a7b69cab9a3eee36c9aec1c"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.088019 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" event={"ID":"7534725a-0a1c-4ef0-b5ce-e6b758b4a174","Type":"ContainerStarted","Data":"ff232872725ed7cc4ddac06da00e1c0b8511f43e5b975e94bc141ec63605b990"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.088728 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" event={"ID":"2cdca653-4a4b-4452-9a00-5667349cb42a","Type":"ContainerStarted","Data":"f1ca5db09d5e355f48cd201203d17abc5a43b52c6fd38d4ea7838905a1507af2"} Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.089791 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" podUID="2cdca653-4a4b-4452-9a00-5667349cb42a" Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.090138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" 
event={"ID":"f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50","Type":"ContainerStarted","Data":"968ec99ca70ed716be4d09ed4329d8706cbdcfa909e9a63819e818fdb2c4d46f"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.090768 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" event={"ID":"7cd351ff-1cb2-417e-9d45-5f16d7dc0a43","Type":"ContainerStarted","Data":"1842166bb6445d197a8ac8ca7d8ed7e2ad7756be6599fa3e5f7f14937a9c9325"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.091413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" event={"ID":"038238a3-7348-4fd5-ae41-3473ff6cd14d","Type":"ContainerStarted","Data":"0bcfaccb5e4ff680421a824eea81a32309d0cfcb19cca2ec917c4ece5f421423"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.092284 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" event={"ID":"d89101c6-6415-47d7-8e82-65d8a7b3a961","Type":"ContainerStarted","Data":"4046e6a6952942a2f89e6cae535f3bdc9a413f457112ea994afafaae88bde3ca"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.092891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" event={"ID":"5b983d23-dbff-4482-b9fc-6fec60b1ab7f","Type":"ContainerStarted","Data":"bbab3c1b44ce2fda5145d8a72e912177acadc5600ab7917fb1a89c72f9c91681"} Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.093456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" event={"ID":"c133cb3a-ff1b-4819-90a2-91d0cecb0ed9","Type":"ContainerStarted","Data":"5b9589ced02923411af22fcdd81d14f91ead87faf5b4a0cd691522a63e4c85bd"} Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.093672 4823 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.66:5001/openstack-k8s-operators/test-operator:03cfa8a5d87dc9740d2bd0cb2c0de5575ca0b56d\\\"\"" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" podUID="d89101c6-6415-47d7-8e82-65d8a7b3a961" Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.465308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.465500 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.465668 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:07.465645935 +0000 UTC m=+984.151109040 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "webhook-server-cert" not found Jan 26 15:03:05 crc kubenswrapper[4823]: I0126 15:03:05.466286 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.466427 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:03:05 crc kubenswrapper[4823]: E0126 15:03:05.466466 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:07.466458386 +0000 UTC m=+984.151921491 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "metrics-server-cert" not found Jan 26 15:03:06 crc kubenswrapper[4823]: I0126 15:03:06.125522 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" event={"ID":"13bd131b-e367-44a0-a552-bf7f2446f6c2","Type":"ContainerStarted","Data":"172ff9372c54c41ade543b4d82abaa05aba4c98759829429f0ba2bf610952513"} Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139090 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" podUID="586e8217-d8bb-4d02-bfae-39db746fb0ca" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139250 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.66:5001/openstack-k8s-operators/test-operator:03cfa8a5d87dc9740d2bd0cb2c0de5575ca0b56d\\\"\"" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" podUID="d89101c6-6415-47d7-8e82-65d8a7b3a961" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139286 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" 
podUID="78a7e26b-4eac-4604-82ff-ce393cf816b6" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139467 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" podUID="2cdca653-4a4b-4452-9a00-5667349cb42a" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139464 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" podUID="38cf7a4f-36ed-4af7-a896-27f163d35986" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139580 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" podUID="13bd131b-e367-44a0-a552-bf7f2446f6c2" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.139835 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:8abfbec47f0119a6c22c61a0ff80a4b1c6c14439a327bc75d4c529c5d8f59658\\\"\"" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" podUID="ee032756-312e-4349-842b-f9bc642f7c08" Jan 26 15:03:06 crc kubenswrapper[4823]: I0126 
15:03:06.703575 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.703878 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:06 crc kubenswrapper[4823]: E0126 15:03:06.704016 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert podName:2bc0a30b-01c7-4626-928b-fedcc58e373e nodeName:}" failed. No retries permitted until 2026-01-26 15:03:10.703987023 +0000 UTC m=+987.389450128 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert") pod "infra-operator-controller-manager-694cf4f878-zcxds" (UID: "2bc0a30b-01c7-4626-928b-fedcc58e373e") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:07 crc kubenswrapper[4823]: I0126 15:03:07.115649 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.120380 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.120649 4823 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert podName:b30af672-528b-4f1d-8bbf-e96085248217 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:11.120629032 +0000 UTC m=+987.806092137 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" (UID: "b30af672-528b-4f1d-8bbf-e96085248217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.148195 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" podUID="13bd131b-e367-44a0-a552-bf7f2446f6c2" Jan 26 15:03:07 crc kubenswrapper[4823]: I0126 15:03:07.556545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:07 crc kubenswrapper[4823]: I0126 15:03:07.556683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 
15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.556960 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.557099 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:11.557050631 +0000 UTC m=+988.242513736 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "webhook-server-cert" not found Jan 26 15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.557295 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:03:07 crc kubenswrapper[4823]: E0126 15:03:07.557436 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:11.55740359 +0000 UTC m=+988.242866695 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "metrics-server-cert" not found Jan 26 15:03:10 crc kubenswrapper[4823]: I0126 15:03:10.720067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:10 crc kubenswrapper[4823]: E0126 15:03:10.720296 4823 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:10 crc kubenswrapper[4823]: E0126 15:03:10.721246 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert podName:2bc0a30b-01c7-4626-928b-fedcc58e373e nodeName:}" failed. No retries permitted until 2026-01-26 15:03:18.721219368 +0000 UTC m=+995.406682474 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert") pod "infra-operator-controller-manager-694cf4f878-zcxds" (UID: "2bc0a30b-01c7-4626-928b-fedcc58e373e") : secret "infra-operator-webhook-server-cert" not found Jan 26 15:03:11 crc kubenswrapper[4823]: I0126 15:03:11.126574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:11 crc kubenswrapper[4823]: E0126 15:03:11.126872 4823 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:11 crc kubenswrapper[4823]: E0126 15:03:11.127003 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert podName:b30af672-528b-4f1d-8bbf-e96085248217 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:19.126974399 +0000 UTC m=+995.812437504 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" (UID: "b30af672-528b-4f1d-8bbf-e96085248217") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 26 15:03:11 crc kubenswrapper[4823]: I0126 15:03:11.633337 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:11 crc kubenswrapper[4823]: I0126 15:03:11.633504 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:11 crc kubenswrapper[4823]: E0126 15:03:11.633697 4823 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 26 15:03:11 crc kubenswrapper[4823]: E0126 15:03:11.633786 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:19.633760882 +0000 UTC m=+996.319223987 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "webhook-server-cert" not found Jan 26 15:03:11 crc kubenswrapper[4823]: E0126 15:03:11.633816 4823 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 26 15:03:11 crc kubenswrapper[4823]: E0126 15:03:11.634004 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs podName:0e7ff918-aecf-4718-912b-d85f1dbd1799 nodeName:}" failed. No retries permitted until 2026-01-26 15:03:19.633964307 +0000 UTC m=+996.319427462 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs") pod "openstack-operator-controller-manager-7fc556f645-qgpp5" (UID: "0e7ff918-aecf-4718-912b-d85f1dbd1799") : secret "metrics-server-cert" not found Jan 26 15:03:18 crc kubenswrapper[4823]: I0126 15:03:18.760762 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:18 crc kubenswrapper[4823]: I0126 15:03:18.772941 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2bc0a30b-01c7-4626-928b-fedcc58e373e-cert\") pod \"infra-operator-controller-manager-694cf4f878-zcxds\" (UID: \"2bc0a30b-01c7-4626-928b-fedcc58e373e\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:18 crc 
kubenswrapper[4823]: I0126 15:03:18.819318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.169953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.174197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b30af672-528b-4f1d-8bbf-e96085248217-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp\" (UID: \"b30af672-528b-4f1d-8bbf-e96085248217\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.251555 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.680691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.681545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.689714 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-metrics-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.691441 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0e7ff918-aecf-4718-912b-d85f1dbd1799-webhook-certs\") pod \"openstack-operator-controller-manager-7fc556f645-qgpp5\" (UID: \"0e7ff918-aecf-4718-912b-d85f1dbd1799\") " pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:19 crc kubenswrapper[4823]: I0126 15:03:19.938798 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:20 crc kubenswrapper[4823]: E0126 15:03:20.351498 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 26 15:03:20 crc kubenswrapper[4823]: E0126 15:03:20.351852 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2d7z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-mrlhr_openstack-operators(16294fad-09f5-4781-83d7-82b25d1bc644): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:03:20 crc kubenswrapper[4823]: E0126 15:03:20.353419 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" podUID="16294fad-09f5-4781-83d7-82b25d1bc644" Jan 26 15:03:20 crc kubenswrapper[4823]: E0126 15:03:20.940751 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 26 15:03:20 crc kubenswrapper[4823]: E0126 15:03:20.941167 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sqjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-xqx59_openstack-operators(0d61828c-0d9d-42d5-8fbe-dea8080b620e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:03:20 crc kubenswrapper[4823]: E0126 15:03:20.942328 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" podUID="0d61828c-0d9d-42d5-8fbe-dea8080b620e" Jan 26 15:03:21 crc kubenswrapper[4823]: E0126 15:03:21.263881 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" podUID="0d61828c-0d9d-42d5-8fbe-dea8080b620e" Jan 26 15:03:21 crc kubenswrapper[4823]: E0126 15:03:21.265928 4823 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" podUID="16294fad-09f5-4781-83d7-82b25d1bc644" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.080049 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.080764 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7764,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-snjmz_openstack-operators(df95f821-a1f5-488a-a730-9c3c2f39fd4c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.082008 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" podUID="df95f821-a1f5-488a-a730-9c3c2f39fd4c" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.273426 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" podUID="df95f821-a1f5-488a-a730-9c3c2f39fd4c" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.735158 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.735522 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-78vqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7f86f8796f-szbhl_openstack-operators(394d042b-9673-4187-8e4a-b479dc07be27): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:03:22 crc kubenswrapper[4823]: E0126 15:03:22.738497 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" podUID="394d042b-9673-4187-8e4a-b479dc07be27" Jan 26 15:03:23 crc kubenswrapper[4823]: E0126 15:03:23.278911 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:c94116e32fb9af850accd9d7ae46765559eef3fbe2ba75472c1c1ac91b2c33fd\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" podUID="394d042b-9673-4187-8e4a-b479dc07be27" Jan 26 15:03:23 crc kubenswrapper[4823]: E0126 15:03:23.377178 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 26 15:03:23 crc kubenswrapper[4823]: E0126 15:03:23.377594 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4m9d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-l9rwn_openstack-operators(038238a3-7348-4fd5-ae41-3473ff6cd14d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:03:23 crc kubenswrapper[4823]: E0126 15:03:23.381062 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" podUID="038238a3-7348-4fd5-ae41-3473ff6cd14d" Jan 26 15:03:24 crc kubenswrapper[4823]: E0126 15:03:24.290754 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" podUID="038238a3-7348-4fd5-ae41-3473ff6cd14d" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.037766 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mj6wb"] Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.039626 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.051352 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj6wb"] Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.196978 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-catalog-content\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.197047 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-utilities\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.197116 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnf5x\" (UniqueName: \"kubernetes.io/projected/878a50a5-badf-4f81-bb50-0bf5873354df-kube-api-access-mnf5x\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " 
pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.298341 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-catalog-content\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.298441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-utilities\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.298517 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnf5x\" (UniqueName: \"kubernetes.io/projected/878a50a5-badf-4f81-bb50-0bf5873354df-kube-api-access-mnf5x\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.299794 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-utilities\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.299820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-catalog-content\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" 
Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.323539 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnf5x\" (UniqueName: \"kubernetes.io/projected/878a50a5-badf-4f81-bb50-0bf5873354df-kube-api-access-mnf5x\") pod \"redhat-marketplace-mj6wb\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:26 crc kubenswrapper[4823]: I0126 15:03:26.369562 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:29 crc kubenswrapper[4823]: E0126 15:03:29.586496 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 26 15:03:29 crc kubenswrapper[4823]: E0126 15:03:29.587389 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6rqng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-tlwrn_openstack-operators(f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:03:29 crc kubenswrapper[4823]: E0126 15:03:29.588671 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" podUID="f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50" Jan 26 15:03:30 crc kubenswrapper[4823]: E0126 15:03:30.336598 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" podUID="f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50" Jan 26 15:03:31 crc kubenswrapper[4823]: I0126 15:03:31.671312 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5"] Jan 26 15:03:32 crc kubenswrapper[4823]: I0126 15:03:32.371250 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp"] Jan 26 15:03:32 crc kubenswrapper[4823]: I0126 15:03:32.434400 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" event={"ID":"0e7ff918-aecf-4718-912b-d85f1dbd1799","Type":"ContainerStarted","Data":"707fdedd03b8e3ba7193521a0ba97def3fd14dd65da80eee83831555e8854b13"} Jan 26 15:03:32 crc kubenswrapper[4823]: I0126 15:03:32.586982 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds"] Jan 26 15:03:32 crc kubenswrapper[4823]: I0126 15:03:32.756706 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj6wb"] Jan 26 15:03:32 crc kubenswrapper[4823]: W0126 15:03:32.782091 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod878a50a5_badf_4f81_bb50_0bf5873354df.slice/crio-76ba0f1a98ff1fd455eef5ff6931a3010ad8b0872152d0808d4a27d64d0fc755 WatchSource:0}: Error finding container 76ba0f1a98ff1fd455eef5ff6931a3010ad8b0872152d0808d4a27d64d0fc755: Status 404 returned error can't find the container with id 76ba0f1a98ff1fd455eef5ff6931a3010ad8b0872152d0808d4a27d64d0fc755 Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.447322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" event={"ID":"bf60542f-f900-4d89-98f4-aeaa7878edda","Type":"ContainerStarted","Data":"602569ccffa76dd666e18692505944496c7ad16c1bfb5b0d8c0950aa89e4bd73"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.447918 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.451053 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" event={"ID":"5b983d23-dbff-4482-b9fc-6fec60b1ab7f","Type":"ContainerStarted","Data":"8172a0e77032cba76eaa7d7613a21a8498d9a804d5cd36d03a994e098b68142a"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.451680 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.463210 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" event={"ID":"0e7ff918-aecf-4718-912b-d85f1dbd1799","Type":"ContainerStarted","Data":"dd40f3c6ade9a558692cf66b56d518cd28905633e5550c8a802382d15ece3e37"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.463355 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.465817 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerStarted","Data":"0d7fa24f4572e8c200c20f2f0dec09c404fee1e835c687aa2fede62eceb9f1ed"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.465893 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerStarted","Data":"76ba0f1a98ff1fd455eef5ff6931a3010ad8b0872152d0808d4a27d64d0fc755"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.467359 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" event={"ID":"2bc0a30b-01c7-4626-928b-fedcc58e373e","Type":"ContainerStarted","Data":"4d8be322f49cff34b3a5e36abb52b3486d19d583d7db28073391169e15123747"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.469304 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" event={"ID":"78a7e26b-4eac-4604-82ff-ce393cf816b6","Type":"ContainerStarted","Data":"b3939ac5f0101e34ac3ec3ffb0458fa0647eec067005a4c352d0ea5ed72b04a5"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.469575 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.474015 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" event={"ID":"ee032756-312e-4349-842b-f9bc642f7c08","Type":"ContainerStarted","Data":"2fc1db71e449851ac75d1eaf575368ecfab19264ff65e7c723874b1711c6cd3d"} Jan 26 
15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.474244 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.479791 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" event={"ID":"7cd351ff-1cb2-417e-9d45-5f16d7dc0a43","Type":"ContainerStarted","Data":"2239d3017aa18866b3cb7559bff3ddce821e0d6d8c89e7e1507245afaed84291"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.479901 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.492589 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" event={"ID":"d30f23ef-3901-419c-afd2-bce286e7bb01","Type":"ContainerStarted","Data":"1d1d01c88792749348fd47ac4d54738a1b4d368ff604555ea9f071a9a976dcd4"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.493479 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.496474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" event={"ID":"f6145a22-466d-42fa-995e-7e6a8c4ffcc2","Type":"ContainerStarted","Data":"1f4d4f0676e966b8feb13ee3743e811bb34ebaa2823fb73d04c4ceda80cf9a89"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.497558 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.501927 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" event={"ID":"d89101c6-6415-47d7-8e82-65d8a7b3a961","Type":"ContainerStarted","Data":"759ee424e60dd4ad9a61e51fc829dc00316af943d8ec9caa80a124ccdff55478"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.503012 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.511163 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" event={"ID":"13bd131b-e367-44a0-a552-bf7f2446f6c2","Type":"ContainerStarted","Data":"e829e9ec28772130010c3053042de6d3144cd9bf6b2d21c949ed4de3ba46556a"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.518114 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" event={"ID":"c133cb3a-ff1b-4819-90a2-91d0cecb0ed9","Type":"ContainerStarted","Data":"c49c73e90c92d0285e91f7b505d72971088a7388bb72ee8a5aff2b18f57768fa"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.518954 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.520281 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" event={"ID":"7534725a-0a1c-4ef0-b5ce-e6b758b4a174","Type":"ContainerStarted","Data":"11c2d7679439241f1beb078256f9961b625a9835d7e14b5efaecdfc41b045fb9"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.520700 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.521558 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" event={"ID":"b30af672-528b-4f1d-8bbf-e96085248217","Type":"ContainerStarted","Data":"d85b8d33a6b40f94f075cb39749008d8dd65c737c27fa2e84d687e58abfced47"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.522624 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" event={"ID":"2cdca653-4a4b-4452-9a00-5667349cb42a","Type":"ContainerStarted","Data":"6d8233c00de67c51624c804fb63f9b617408a32ce5104558d0ec276aca76862e"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.523048 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.550899 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" event={"ID":"38cf7a4f-36ed-4af7-a896-27f163d35986","Type":"ContainerStarted","Data":"547f463b4ea90a6d6bf761c9432fa92262b8724f34ec790e2d1ec69d9d992edc"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.551664 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.552731 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" event={"ID":"586e8217-d8bb-4d02-bfae-39db746fb0ca","Type":"ContainerStarted","Data":"37caca0983b4ac33fe1b240f01eede7cbc04d8f158cfe268e3d2b5a66def9e12"} Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.584147 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" podStartSLOduration=10.044289962 
podStartE2EDuration="31.584109315s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.287654896 +0000 UTC m=+980.973118001" lastFinishedPulling="2026-01-26 15:03:25.827474249 +0000 UTC m=+1002.512937354" observedRunningTime="2026-01-26 15:03:33.531903909 +0000 UTC m=+1010.217367034" watchObservedRunningTime="2026-01-26 15:03:33.584109315 +0000 UTC m=+1010.269572420" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.584282 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" podStartSLOduration=10.430134028 podStartE2EDuration="31.584277759s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.675419455 +0000 UTC m=+981.360882560" lastFinishedPulling="2026-01-26 15:03:25.829563186 +0000 UTC m=+1002.515026291" observedRunningTime="2026-01-26 15:03:33.569916137 +0000 UTC m=+1010.255379242" watchObservedRunningTime="2026-01-26 15:03:33.584277759 +0000 UTC m=+1010.269740864" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.704028 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" podStartSLOduration=10.165116374 podStartE2EDuration="31.704007432s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.288581431 +0000 UTC m=+980.974044536" lastFinishedPulling="2026-01-26 15:03:25.827472489 +0000 UTC m=+1002.512935594" observedRunningTime="2026-01-26 15:03:33.701852523 +0000 UTC m=+1010.387315628" watchObservedRunningTime="2026-01-26 15:03:33.704007432 +0000 UTC m=+1010.389470547" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.704226 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" podStartSLOduration=30.704221328 
podStartE2EDuration="30.704221328s" podCreationTimestamp="2026-01-26 15:03:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:03:33.663764663 +0000 UTC m=+1010.349227788" watchObservedRunningTime="2026-01-26 15:03:33.704221328 +0000 UTC m=+1010.389684433" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.805151 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-2247x" podStartSLOduration=3.394689158 podStartE2EDuration="30.805116976s" podCreationTimestamp="2026-01-26 15:03:03 +0000 UTC" firstStartedPulling="2026-01-26 15:03:05.025637748 +0000 UTC m=+981.711100843" lastFinishedPulling="2026-01-26 15:03:32.436065546 +0000 UTC m=+1009.121528661" observedRunningTime="2026-01-26 15:03:33.740223802 +0000 UTC m=+1010.425686907" watchObservedRunningTime="2026-01-26 15:03:33.805116976 +0000 UTC m=+1010.490580081" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.807224 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" podStartSLOduration=3.064874394 podStartE2EDuration="30.807212884s" podCreationTimestamp="2026-01-26 15:03:03 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.958608675 +0000 UTC m=+981.644071780" lastFinishedPulling="2026-01-26 15:03:32.700947165 +0000 UTC m=+1009.386410270" observedRunningTime="2026-01-26 15:03:33.794508306 +0000 UTC m=+1010.479971421" watchObservedRunningTime="2026-01-26 15:03:33.807212884 +0000 UTC m=+1010.492675989" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.897973 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" podStartSLOduration=4.615686368 podStartE2EDuration="31.897941413s" podCreationTimestamp="2026-01-26 15:03:02 +0000 
UTC" firstStartedPulling="2026-01-26 15:03:05.001317152 +0000 UTC m=+981.686780257" lastFinishedPulling="2026-01-26 15:03:32.283572157 +0000 UTC m=+1008.969035302" observedRunningTime="2026-01-26 15:03:33.856646095 +0000 UTC m=+1010.542109200" watchObservedRunningTime="2026-01-26 15:03:33.897941413 +0000 UTC m=+1010.583404518" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.940124 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" podStartSLOduration=10.047141402 podStartE2EDuration="30.940095815s" podCreationTimestamp="2026-01-26 15:03:03 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.934547687 +0000 UTC m=+981.620010792" lastFinishedPulling="2026-01-26 15:03:25.8275021 +0000 UTC m=+1002.512965205" observedRunningTime="2026-01-26 15:03:33.938227755 +0000 UTC m=+1010.623690860" watchObservedRunningTime="2026-01-26 15:03:33.940095815 +0000 UTC m=+1010.625558920" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.940811 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" podStartSLOduration=9.87687911 podStartE2EDuration="30.940805655s" podCreationTimestamp="2026-01-26 15:03:03 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.765752074 +0000 UTC m=+981.451215179" lastFinishedPulling="2026-01-26 15:03:25.829678619 +0000 UTC m=+1002.515141724" observedRunningTime="2026-01-26 15:03:33.901691486 +0000 UTC m=+1010.587154591" watchObservedRunningTime="2026-01-26 15:03:33.940805655 +0000 UTC m=+1010.626268760" Jan 26 15:03:33 crc kubenswrapper[4823]: I0126 15:03:33.990943 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" podStartSLOduration=4.6702342 podStartE2EDuration="31.990911585s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" 
firstStartedPulling="2026-01-26 15:03:04.961936236 +0000 UTC m=+981.647399341" lastFinishedPulling="2026-01-26 15:03:32.282613591 +0000 UTC m=+1008.968076726" observedRunningTime="2026-01-26 15:03:33.979947555 +0000 UTC m=+1010.665410660" watchObservedRunningTime="2026-01-26 15:03:33.990911585 +0000 UTC m=+1010.676374690" Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.051432 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" podStartSLOduration=4.116599221 podStartE2EDuration="31.051357197s" podCreationTimestamp="2026-01-26 15:03:03 +0000 UTC" firstStartedPulling="2026-01-26 15:03:05.001668802 +0000 UTC m=+981.687131907" lastFinishedPulling="2026-01-26 15:03:31.936426778 +0000 UTC m=+1008.621889883" observedRunningTime="2026-01-26 15:03:34.040979973 +0000 UTC m=+1010.726443078" watchObservedRunningTime="2026-01-26 15:03:34.051357197 +0000 UTC m=+1010.736820302" Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.083762 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" podStartSLOduration=10.955303604000001 podStartE2EDuration="32.083743163s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.69903732 +0000 UTC m=+981.384500425" lastFinishedPulling="2026-01-26 15:03:25.827476879 +0000 UTC m=+1002.512939984" observedRunningTime="2026-01-26 15:03:34.078120859 +0000 UTC m=+1010.763583964" watchObservedRunningTime="2026-01-26 15:03:34.083743163 +0000 UTC m=+1010.769206268" Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.116817 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" podStartSLOduration=10.605688459 podStartE2EDuration="32.116793396s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" 
firstStartedPulling="2026-01-26 15:03:04.31852986 +0000 UTC m=+981.003992965" lastFinishedPulling="2026-01-26 15:03:25.829634797 +0000 UTC m=+1002.515097902" observedRunningTime="2026-01-26 15:03:34.100171872 +0000 UTC m=+1010.785634977" watchObservedRunningTime="2026-01-26 15:03:34.116793396 +0000 UTC m=+1010.802256501" Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.245623 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" podStartSLOduration=5.260305969 podStartE2EDuration="32.245602117s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.949974569 +0000 UTC m=+981.635437674" lastFinishedPulling="2026-01-26 15:03:31.935270727 +0000 UTC m=+1008.620733822" observedRunningTime="2026-01-26 15:03:34.240781585 +0000 UTC m=+1010.926244680" watchObservedRunningTime="2026-01-26 15:03:34.245602117 +0000 UTC m=+1010.931065212" Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.507921 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.508379 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.568027 4823 generic.go:334] "Generic (PLEG): container finished" podID="878a50a5-badf-4f81-bb50-0bf5873354df" containerID="0d7fa24f4572e8c200c20f2f0dec09c404fee1e835c687aa2fede62eceb9f1ed" exitCode=0 Jan 26 
15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.568252 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerDied","Data":"0d7fa24f4572e8c200c20f2f0dec09c404fee1e835c687aa2fede62eceb9f1ed"} Jan 26 15:03:34 crc kubenswrapper[4823]: I0126 15:03:34.618513 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" podStartSLOduration=5.336909493 podStartE2EDuration="32.618495629s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:05.001557569 +0000 UTC m=+981.687020674" lastFinishedPulling="2026-01-26 15:03:32.283143695 +0000 UTC m=+1008.968606810" observedRunningTime="2026-01-26 15:03:34.607837988 +0000 UTC m=+1011.293301093" watchObservedRunningTime="2026-01-26 15:03:34.618495629 +0000 UTC m=+1011.303958724" Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.594268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" event={"ID":"0d61828c-0d9d-42d5-8fbe-dea8080b620e","Type":"ContainerStarted","Data":"56c6d11e1932f5d3c4294df1037b9e43e6fe0157f502352b630d4479d49424df"} Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.595419 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.599181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" event={"ID":"df95f821-a1f5-488a-a730-9c3c2f39fd4c","Type":"ContainerStarted","Data":"7c6ca68b757210fb9309f456f1e6db9d52f9a7cc035f89eaa03bf0cfeff14be1"} Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.599766 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.604628 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerStarted","Data":"653559c26e417dc7db19642aadf1b77e64b013762d78f7b3c4bb77919581239d"} Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.717105 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" podStartSLOduration=3.375692666 podStartE2EDuration="33.717076247s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.760892071 +0000 UTC m=+981.446355176" lastFinishedPulling="2026-01-26 15:03:35.102275532 +0000 UTC m=+1011.787738757" observedRunningTime="2026-01-26 15:03:35.667689647 +0000 UTC m=+1012.353152752" watchObservedRunningTime="2026-01-26 15:03:35.717076247 +0000 UTC m=+1012.402539352" Jan 26 15:03:35 crc kubenswrapper[4823]: I0126 15:03:35.729308 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" podStartSLOduration=4.048826955 podStartE2EDuration="33.72927984s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.688613966 +0000 UTC m=+981.374077071" lastFinishedPulling="2026-01-26 15:03:34.369066851 +0000 UTC m=+1011.054529956" observedRunningTime="2026-01-26 15:03:35.692456894 +0000 UTC m=+1012.377919999" watchObservedRunningTime="2026-01-26 15:03:35.72927984 +0000 UTC m=+1012.414742945" Jan 26 15:03:36 crc kubenswrapper[4823]: I0126 15:03:36.621871 4823 generic.go:334] "Generic (PLEG): container finished" podID="878a50a5-badf-4f81-bb50-0bf5873354df" containerID="653559c26e417dc7db19642aadf1b77e64b013762d78f7b3c4bb77919581239d" exitCode=0 Jan 26 15:03:36 crc 
kubenswrapper[4823]: I0126 15:03:36.621952 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerDied","Data":"653559c26e417dc7db19642aadf1b77e64b013762d78f7b3c4bb77919581239d"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.651463 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerStarted","Data":"6f171d4e3dabf686a74e4aa23688f266237c7a56bab3836c5af4515627df1108"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.653809 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" event={"ID":"b30af672-528b-4f1d-8bbf-e96085248217","Type":"ContainerStarted","Data":"b970774d179e3e595dd4042aca6eaa0ccf18bc68ab802af615e63aa890aed6f9"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.653971 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.656532 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" event={"ID":"2bc0a30b-01c7-4626-928b-fedcc58e373e","Type":"ContainerStarted","Data":"2c544fa6497f11a35b495d9010268f0a2e64cb3e641c596e8f311be421a0f929"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.656626 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.658227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" 
event={"ID":"16294fad-09f5-4781-83d7-82b25d1bc644","Type":"ContainerStarted","Data":"bba6611d734ff1c2a35564d36b7371c4e4511be5ecd010465233541a79916f32"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.658466 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.661468 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" event={"ID":"394d042b-9673-4187-8e4a-b479dc07be27","Type":"ContainerStarted","Data":"506e48e646edbeae179c5fd46a2f9dea083dd366c19b70944015f4d3f2161b1c"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.661690 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.663984 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" event={"ID":"038238a3-7348-4fd5-ae41-3473ff6cd14d","Type":"ContainerStarted","Data":"b846429732152c6ee925f2bc57fd3c9ea7e03a46ae228432ec3496a33e19381e"} Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.664215 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.678794 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mj6wb" podStartSLOduration=9.522308414 podStartE2EDuration="13.678764444s" podCreationTimestamp="2026-01-26 15:03:26 +0000 UTC" firstStartedPulling="2026-01-26 15:03:34.573409077 +0000 UTC m=+1011.258872182" lastFinishedPulling="2026-01-26 15:03:38.729865107 +0000 UTC m=+1015.415328212" observedRunningTime="2026-01-26 15:03:39.676142053 
+0000 UTC m=+1016.361605158" watchObservedRunningTime="2026-01-26 15:03:39.678764444 +0000 UTC m=+1016.364227549" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.718438 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" podStartSLOduration=4.152101388 podStartE2EDuration="37.718411798s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.724087135 +0000 UTC m=+981.409550240" lastFinishedPulling="2026-01-26 15:03:38.290397545 +0000 UTC m=+1014.975860650" observedRunningTime="2026-01-26 15:03:39.7100796 +0000 UTC m=+1016.395542715" watchObservedRunningTime="2026-01-26 15:03:39.718411798 +0000 UTC m=+1016.403874903" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.735077 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" podStartSLOduration=31.944464825 podStartE2EDuration="37.735051163s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:32.728639722 +0000 UTC m=+1009.414102827" lastFinishedPulling="2026-01-26 15:03:38.51922605 +0000 UTC m=+1015.204689165" observedRunningTime="2026-01-26 15:03:39.732453152 +0000 UTC m=+1016.417916267" watchObservedRunningTime="2026-01-26 15:03:39.735051163 +0000 UTC m=+1016.420514268" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.773850 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" podStartSLOduration=31.690489053 podStartE2EDuration="37.773816803s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:32.630782607 +0000 UTC m=+1009.316245712" lastFinishedPulling="2026-01-26 15:03:38.714110357 +0000 UTC m=+1015.399573462" observedRunningTime="2026-01-26 15:03:39.767950682 +0000 UTC 
m=+1016.453413797" watchObservedRunningTime="2026-01-26 15:03:39.773816803 +0000 UTC m=+1016.459279908" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.789745 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" podStartSLOduration=3.153851034 podStartE2EDuration="37.789715928s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.287976085 +0000 UTC m=+980.973439190" lastFinishedPulling="2026-01-26 15:03:38.923840979 +0000 UTC m=+1015.609304084" observedRunningTime="2026-01-26 15:03:39.784557216 +0000 UTC m=+1016.470020321" watchObservedRunningTime="2026-01-26 15:03:39.789715928 +0000 UTC m=+1016.475179033" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.811733 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" podStartSLOduration=3.809156744 podStartE2EDuration="37.811700328s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.287951794 +0000 UTC m=+980.973414899" lastFinishedPulling="2026-01-26 15:03:38.290495338 +0000 UTC m=+1014.975958483" observedRunningTime="2026-01-26 15:03:39.81067527 +0000 UTC m=+1016.496138375" watchObservedRunningTime="2026-01-26 15:03:39.811700328 +0000 UTC m=+1016.497163433" Jan 26 15:03:39 crc kubenswrapper[4823]: I0126 15:03:39.945348 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 15:03:41 crc kubenswrapper[4823]: I0126 15:03:41.563851 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:03:42 crc kubenswrapper[4823]: I0126 15:03:42.967524 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-rgql2" Jan 26 15:03:42 crc kubenswrapper[4823]: I0126 15:03:42.970551 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-f4qg7" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.071249 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-s7b2n" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.210264 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-p2vfc" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.305883 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-lrrrj" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.389927 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-xqx59" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.411418 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.428579 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.493484 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-9k7d5" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.537154 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.545752 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-2dzj6" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.611606 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-snjmz" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.791678 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.846305 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-5qs2p" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.876745 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-85cd9769bb-h4ckq" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.972818 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-948cd64bd-tpsth" Jan 26 15:03:43 crc kubenswrapper[4823]: I0126 15:03:43.988247 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-lltvv" Jan 26 15:03:46 crc kubenswrapper[4823]: I0126 15:03:46.370954 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:46 crc kubenswrapper[4823]: I0126 15:03:46.372562 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:46 crc 
kubenswrapper[4823]: I0126 15:03:46.432930 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:46 crc kubenswrapper[4823]: I0126 15:03:46.768160 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:46 crc kubenswrapper[4823]: I0126 15:03:46.821988 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj6wb"] Jan 26 15:03:48 crc kubenswrapper[4823]: I0126 15:03:48.737263 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mj6wb" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="registry-server" containerID="cri-o://6f171d4e3dabf686a74e4aa23688f266237c7a56bab3836c5af4515627df1108" gracePeriod=2 Jan 26 15:03:48 crc kubenswrapper[4823]: I0126 15:03:48.829594 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-zcxds" Jan 26 15:03:49 crc kubenswrapper[4823]: I0126 15:03:49.257749 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp" Jan 26 15:03:50 crc kubenswrapper[4823]: I0126 15:03:50.763591 4823 generic.go:334] "Generic (PLEG): container finished" podID="878a50a5-badf-4f81-bb50-0bf5873354df" containerID="6f171d4e3dabf686a74e4aa23688f266237c7a56bab3836c5af4515627df1108" exitCode=0 Jan 26 15:03:50 crc kubenswrapper[4823]: I0126 15:03:50.763632 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerDied","Data":"6f171d4e3dabf686a74e4aa23688f266237c7a56bab3836c5af4515627df1108"} Jan 26 15:03:52 crc kubenswrapper[4823]: I0126 15:03:52.940955 4823 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-szbhl" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.038270 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-l9rwn" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.589241 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.612208 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnf5x\" (UniqueName: \"kubernetes.io/projected/878a50a5-badf-4f81-bb50-0bf5873354df-kube-api-access-mnf5x\") pod \"878a50a5-badf-4f81-bb50-0bf5873354df\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.612712 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-utilities\") pod \"878a50a5-badf-4f81-bb50-0bf5873354df\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.613041 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-catalog-content\") pod \"878a50a5-badf-4f81-bb50-0bf5873354df\" (UID: \"878a50a5-badf-4f81-bb50-0bf5873354df\") " Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.613702 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-utilities" (OuterVolumeSpecName: "utilities") pod "878a50a5-badf-4f81-bb50-0bf5873354df" (UID: "878a50a5-badf-4f81-bb50-0bf5873354df"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.630380 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878a50a5-badf-4f81-bb50-0bf5873354df-kube-api-access-mnf5x" (OuterVolumeSpecName: "kube-api-access-mnf5x") pod "878a50a5-badf-4f81-bb50-0bf5873354df" (UID: "878a50a5-badf-4f81-bb50-0bf5873354df"). InnerVolumeSpecName "kube-api-access-mnf5x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.653849 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "878a50a5-badf-4f81-bb50-0bf5873354df" (UID: "878a50a5-badf-4f81-bb50-0bf5873354df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.714105 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.714149 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnf5x\" (UniqueName: \"kubernetes.io/projected/878a50a5-badf-4f81-bb50-0bf5873354df-kube-api-access-mnf5x\") on node \"crc\" DevicePath \"\"" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.714164 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/878a50a5-badf-4f81-bb50-0bf5873354df-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.789037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mj6wb" 
event={"ID":"878a50a5-badf-4f81-bb50-0bf5873354df","Type":"ContainerDied","Data":"76ba0f1a98ff1fd455eef5ff6931a3010ad8b0872152d0808d4a27d64d0fc755"} Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.789120 4823 scope.go:117] "RemoveContainer" containerID="6f171d4e3dabf686a74e4aa23688f266237c7a56bab3836c5af4515627df1108" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.789205 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mj6wb" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.815236 4823 scope.go:117] "RemoveContainer" containerID="653559c26e417dc7db19642aadf1b77e64b013762d78f7b3c4bb77919581239d" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.848939 4823 scope.go:117] "RemoveContainer" containerID="0d7fa24f4572e8c200c20f2f0dec09c404fee1e835c687aa2fede62eceb9f1ed" Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.856926 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj6wb"] Jan 26 15:03:53 crc kubenswrapper[4823]: I0126 15:03:53.867973 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mj6wb"] Jan 26 15:03:54 crc kubenswrapper[4823]: I0126 15:03:54.804521 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" event={"ID":"f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50","Type":"ContainerStarted","Data":"6b47b4f69408a541906315b9efd90b63a8ed711bde9d54a8fc01cfdf42a6ec73"} Jan 26 15:03:54 crc kubenswrapper[4823]: I0126 15:03:54.806694 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:03:55 crc kubenswrapper[4823]: I0126 15:03:55.571216 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" 
path="/var/lib/kubelet/pods/878a50a5-badf-4f81-bb50-0bf5873354df/volumes" Jan 26 15:04:03 crc kubenswrapper[4823]: I0126 15:04:03.336724 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" Jan 26 15:04:03 crc kubenswrapper[4823]: I0126 15:04:03.364767 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-tlwrn" podStartSLOduration=11.814702096 podStartE2EDuration="1m1.364737332s" podCreationTimestamp="2026-01-26 15:03:02 +0000 UTC" firstStartedPulling="2026-01-26 15:03:04.732863725 +0000 UTC m=+981.418326820" lastFinishedPulling="2026-01-26 15:03:54.282898951 +0000 UTC m=+1030.968362056" observedRunningTime="2026-01-26 15:03:54.82760158 +0000 UTC m=+1031.513064715" watchObservedRunningTime="2026-01-26 15:04:03.364737332 +0000 UTC m=+1040.050200447" Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.508063 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.508691 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.508774 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.509707 4823 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ced2fb81c930871220e3d3d613f6291ccb0288d32b2aebbbf2e414d7715540a7"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.509835 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://ced2fb81c930871220e3d3d613f6291ccb0288d32b2aebbbf2e414d7715540a7" gracePeriod=600 Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.894012 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="ced2fb81c930871220e3d3d613f6291ccb0288d32b2aebbbf2e414d7715540a7" exitCode=0 Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.894133 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"ced2fb81c930871220e3d3d613f6291ccb0288d32b2aebbbf2e414d7715540a7"} Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.894567 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"6349657ac17d7db3f38b64f373c6e824084e4cf157cbb0ce8765094b3f648c48"} Jan 26 15:04:04 crc kubenswrapper[4823]: I0126 15:04:04.894598 4823 scope.go:117] "RemoveContainer" containerID="ec7366454a163fe04376dec76b6dddb0ad3a342a392aad185b53b45a854cd90d" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.346423 4823 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-dkxkd"] Jan 26 15:04:21 crc kubenswrapper[4823]: E0126 15:04:21.347465 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="extract-content" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.347482 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="extract-content" Jan 26 15:04:21 crc kubenswrapper[4823]: E0126 15:04:21.347501 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="registry-server" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.347507 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="registry-server" Jan 26 15:04:21 crc kubenswrapper[4823]: E0126 15:04:21.347520 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="extract-utilities" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.347526 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="extract-utilities" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.347664 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="878a50a5-badf-4f81-bb50-0bf5873354df" containerName="registry-server" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.348489 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.353325 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-khs8k" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.353881 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.354126 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.354826 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.359866 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dkxkd"] Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.454520 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zfc4r"] Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.455922 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.461855 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.517250 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zfc4r"] Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.529705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7vhm\" (UniqueName: \"kubernetes.io/projected/84dc7a24-9dd4-4a59-85f2-0283786628d4-kube-api-access-g7vhm\") pod \"dnsmasq-dns-675f4bcbfc-dkxkd\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.529806 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dc7a24-9dd4-4a59-85f2-0283786628d4-config\") pod \"dnsmasq-dns-675f4bcbfc-dkxkd\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.631682 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8clc\" (UniqueName: \"kubernetes.io/projected/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-kube-api-access-f8clc\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.631763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.631798 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7vhm\" (UniqueName: \"kubernetes.io/projected/84dc7a24-9dd4-4a59-85f2-0283786628d4-kube-api-access-g7vhm\") pod \"dnsmasq-dns-675f4bcbfc-dkxkd\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.632076 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dc7a24-9dd4-4a59-85f2-0283786628d4-config\") pod \"dnsmasq-dns-675f4bcbfc-dkxkd\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.632171 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-config\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.633269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dc7a24-9dd4-4a59-85f2-0283786628d4-config\") pod \"dnsmasq-dns-675f4bcbfc-dkxkd\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.666646 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7vhm\" (UniqueName: \"kubernetes.io/projected/84dc7a24-9dd4-4a59-85f2-0283786628d4-kube-api-access-g7vhm\") pod \"dnsmasq-dns-675f4bcbfc-dkxkd\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:21 crc 
kubenswrapper[4823]: I0126 15:04:21.676649 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.736753 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-config\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.738018 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8clc\" (UniqueName: \"kubernetes.io/projected/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-kube-api-access-f8clc\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.738068 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.740817 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.742709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-config\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.762164 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8clc\" (UniqueName: \"kubernetes.io/projected/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-kube-api-access-f8clc\") pod \"dnsmasq-dns-78dd6ddcc-zfc4r\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:21 crc kubenswrapper[4823]: I0126 15:04:21.773023 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r"
Jan 26 15:04:22 crc kubenswrapper[4823]: I0126 15:04:22.159732 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dkxkd"]
Jan 26 15:04:22 crc kubenswrapper[4823]: I0126 15:04:22.278145 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zfc4r"]
Jan 26 15:04:22 crc kubenswrapper[4823]: W0126 15:04:22.282738 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd597d5f2_3e6d_4f78_bca6_d4b11cc244fe.slice/crio-9c1064988b25b0d0d5358d2f6bdb1524bb82413476396b20b7b14d81a1b5b830 WatchSource:0}: Error finding container 9c1064988b25b0d0d5358d2f6bdb1524bb82413476396b20b7b14d81a1b5b830: Status 404 returned error can't find the container with id 9c1064988b25b0d0d5358d2f6bdb1524bb82413476396b20b7b14d81a1b5b830
Jan 26 15:04:23 crc kubenswrapper[4823]: I0126 15:04:23.092374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" event={"ID":"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe","Type":"ContainerStarted","Data":"9c1064988b25b0d0d5358d2f6bdb1524bb82413476396b20b7b14d81a1b5b830"}
Jan 26 15:04:23 crc kubenswrapper[4823]: I0126 15:04:23.094669 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" event={"ID":"84dc7a24-9dd4-4a59-85f2-0283786628d4","Type":"ContainerStarted","Data":"60f24c82c1fc1b3013aa502b0435de26df6daaf1d65fd4e802b58b07ae94e028"}
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.111806 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dkxkd"]
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.153639 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krn8r"]
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.168019 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krn8r"]
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.169133 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.188251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.188551 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c5h5\" (UniqueName: \"kubernetes.io/projected/9a001b80-e9c3-4de1-9a2a-d368adac1975-kube-api-access-2c5h5\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.188758 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-config\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.289842 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.290420 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c5h5\" (UniqueName: \"kubernetes.io/projected/9a001b80-e9c3-4de1-9a2a-d368adac1975-kube-api-access-2c5h5\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.290468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-config\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.291196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-dns-svc\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.291633 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-config\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.318264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c5h5\" (UniqueName: \"kubernetes.io/projected/9a001b80-e9c3-4de1-9a2a-d368adac1975-kube-api-access-2c5h5\") pod \"dnsmasq-dns-666b6646f7-krn8r\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.511802 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krn8r"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.543731 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zfc4r"]
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.627610 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5hjqg"]
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.629457 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.633293 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5hjqg"]
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.801993 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzrk\" (UniqueName: \"kubernetes.io/projected/810199cf-7934-4e66-91b7-293883b42f7b-kube-api-access-sjzrk\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.802616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.803122 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-config\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.906719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-config\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.906837 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjzrk\" (UniqueName: \"kubernetes.io/projected/810199cf-7934-4e66-91b7-293883b42f7b-kube-api-access-sjzrk\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.906934 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.907712 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-config\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.910222 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.934992 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzrk\" (UniqueName: \"kubernetes.io/projected/810199cf-7934-4e66-91b7-293883b42f7b-kube-api-access-sjzrk\") pod \"dnsmasq-dns-57d769cc4f-5hjqg\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:24 crc kubenswrapper[4823]: I0126 15:04:24.972122 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.134349 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krn8r"]
Jan 26 15:04:25 crc kubenswrapper[4823]: W0126 15:04:25.156071 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a001b80_e9c3_4de1_9a2a_d368adac1975.slice/crio-ccf4d9043d06f4e053b61635a8c9af66a104f14b9c3d986bae4c1957ca3aa6be WatchSource:0}: Error finding container ccf4d9043d06f4e053b61635a8c9af66a104f14b9c3d986bae4c1957ca3aa6be: Status 404 returned error can't find the container with id ccf4d9043d06f4e053b61635a8c9af66a104f14b9c3d986bae4c1957ca3aa6be
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.289823 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.295144 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.299396 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.299700 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.300748 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.300899 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.301022 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.301031 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q2xzp"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.301054 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.309834 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.419938 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.419985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420022 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-config-data\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420041 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420139 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43c52fb-3ef3-4d3e-984d-642a9bc09469-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420167 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43c52fb-3ef3-4d3e-984d-642a9bc09469-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420190 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420251 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbngj\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-kube-api-access-jbngj\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420422 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.420463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.489575 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5hjqg"]
Jan 26 15:04:25 crc kubenswrapper[4823]: W0126 15:04:25.500500 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod810199cf_7934_4e66_91b7_293883b42f7b.slice/crio-afde8a3bf63341fff2a6d93f38199443b5486b3306a339415bc0fe0b5976410f WatchSource:0}: Error finding container afde8a3bf63341fff2a6d93f38199443b5486b3306a339415bc0fe0b5976410f: Status 404 returned error can't find the container with id afde8a3bf63341fff2a6d93f38199443b5486b3306a339415bc0fe0b5976410f
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522019 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522595 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-config-data\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522620 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522706 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522724 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43c52fb-3ef3-4d3e-984d-642a9bc09469-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522790 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43c52fb-3ef3-4d3e-984d-642a9bc09469-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522846 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522882 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbngj\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-kube-api-access-jbngj\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.522971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.523128 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.524918 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.525176 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-config-data\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.525501 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.528271 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.528846 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-server-conf\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.530768 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43c52fb-3ef3-4d3e-984d-642a9bc09469-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.531003 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.531513 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43c52fb-3ef3-4d3e-984d-642a9bc09469-pod-info\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.531796 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.533557 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.546186 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbngj\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-kube-api-access-jbngj\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.562562 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.672914 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.721780 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.730915 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.738826 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.739719 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.748142 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.748171 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rjxvp"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.748792 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.748846 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.750639 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.762403 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.837538 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.837611 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.837857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.840451 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.840576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.840739 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a82c17e1-38ac-4448-b3ff-b18df77c521b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.840806 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z85zf\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-kube-api-access-z85zf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.841263 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.841321 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.841442 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.841513 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a82c17e1-38ac-4448-b3ff-b18df77c521b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.944847 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a82c17e1-38ac-4448-b3ff-b18df77c521b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.944906 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z85zf\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-kube-api-access-z85zf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.944940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.944966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.945006 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.945029 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a82c17e1-38ac-4448-b3ff-b18df77c521b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.945070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.945687 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.945913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.946013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.946036 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.946075 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.946203 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.947653 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.948039 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.948047 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.948525 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.950417 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a82c17e1-38ac-4448-b3ff-b18df77c521b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.950920 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.952629 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a82c17e1-38ac-4448-b3ff-b18df77c521b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.968152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.968613 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z85zf\" (UniqueName: 
\"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-kube-api-access-z85zf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:25 crc kubenswrapper[4823]: I0126 15:04:25.969514 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.078847 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.203975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg" event={"ID":"810199cf-7934-4e66-91b7-293883b42f7b","Type":"ContainerStarted","Data":"afde8a3bf63341fff2a6d93f38199443b5486b3306a339415bc0fe0b5976410f"} Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.212559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" event={"ID":"9a001b80-e9c3-4de1-9a2a-d368adac1975","Type":"ContainerStarted","Data":"ccf4d9043d06f4e053b61635a8c9af66a104f14b9c3d986bae4c1957ca3aa6be"} Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.368139 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.516015 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.896208 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.898101 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.901063 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.901434 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-pz9bg" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.901461 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.901976 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.902135 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.907114 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-kolla-config\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970316 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970418 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/29094f76-d918-4ee5-8064-52c459a4bdce-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970772 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-config-data-default\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970833 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgw4\" (UniqueName: \"kubernetes.io/projected/29094f76-d918-4ee5-8064-52c459a4bdce-kube-api-access-gvgw4\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.970876 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29094f76-d918-4ee5-8064-52c459a4bdce-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:26 crc kubenswrapper[4823]: I0126 15:04:26.971149 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/29094f76-d918-4ee5-8064-52c459a4bdce-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.076793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-config-data-default\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.076851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvgw4\" (UniqueName: \"kubernetes.io/projected/29094f76-d918-4ee5-8064-52c459a4bdce-kube-api-access-gvgw4\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.076873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29094f76-d918-4ee5-8064-52c459a4bdce-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.076941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29094f76-d918-4ee5-8064-52c459a4bdce-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.078710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-kolla-config\") pod \"openstack-galera-0\" 
(UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.078924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.078957 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29094f76-d918-4ee5-8064-52c459a4bdce-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.083284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.078025 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-config-data-default\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.077810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/29094f76-d918-4ee5-8064-52c459a4bdce-config-data-generated\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.083961 4823 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.084479 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-kolla-config\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.097648 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/29094f76-d918-4ee5-8064-52c459a4bdce-operator-scripts\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.107450 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/29094f76-d918-4ee5-8064-52c459a4bdce-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.107494 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29094f76-d918-4ee5-8064-52c459a4bdce-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.132974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvgw4\" (UniqueName: 
\"kubernetes.io/projected/29094f76-d918-4ee5-8064-52c459a4bdce-kube-api-access-gvgw4\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.133967 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"29094f76-d918-4ee5-8064-52c459a4bdce\") " pod="openstack/openstack-galera-0" Jan 26 15:04:27 crc kubenswrapper[4823]: I0126 15:04:27.231880 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.182129 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.183693 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.187594 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-kvbfq" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.188302 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.188529 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.193662 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.218971 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.310303 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.311051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd872d0-a323-4968-9e53-37fefc8adc23-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.311163 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffp2k\" (UniqueName: \"kubernetes.io/projected/7dd872d0-a323-4968-9e53-37fefc8adc23-kube-api-access-ffp2k\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.311228 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.311292 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd872d0-a323-4968-9e53-37fefc8adc23-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc 
kubenswrapper[4823]: I0126 15:04:28.311402 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.311465 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.312161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd872d0-a323-4968-9e53-37fefc8adc23-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414517 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffp2k\" (UniqueName: \"kubernetes.io/projected/7dd872d0-a323-4968-9e53-37fefc8adc23-kube-api-access-ffp2k\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414593 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414622 
4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd872d0-a323-4968-9e53-37fefc8adc23-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414663 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414693 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414713 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd872d0-a323-4968-9e53-37fefc8adc23-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414745 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.414773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd872d0-a323-4968-9e53-37fefc8adc23-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.423293 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.423827 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7dd872d0-a323-4968-9e53-37fefc8adc23-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.424190 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.424963 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.430382 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7dd872d0-a323-4968-9e53-37fefc8adc23-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.434779 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd872d0-a323-4968-9e53-37fefc8adc23-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.443088 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7dd872d0-a323-4968-9e53-37fefc8adc23-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.455000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffp2k\" (UniqueName: \"kubernetes.io/projected/7dd872d0-a323-4968-9e53-37fefc8adc23-kube-api-access-ffp2k\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.477747 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7dd872d0-a323-4968-9e53-37fefc8adc23\") " pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.532790 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.535763 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.539214 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.540016 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-998bb" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.540639 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.540984 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.592554 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.639463 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1cbd37a3-3241-4e20-9d2b-c73873212cb1-config-data\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.639571 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cbd37a3-3241-4e20-9d2b-c73873212cb1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.639598 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1cbd37a3-3241-4e20-9d2b-c73873212cb1-kolla-config\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " 
pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.639616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cbd37a3-3241-4e20-9d2b-c73873212cb1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.639659 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmrxz\" (UniqueName: \"kubernetes.io/projected/1cbd37a3-3241-4e20-9d2b-c73873212cb1-kube-api-access-lmrxz\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.742846 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cbd37a3-3241-4e20-9d2b-c73873212cb1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.742903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1cbd37a3-3241-4e20-9d2b-c73873212cb1-kolla-config\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.742929 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cbd37a3-3241-4e20-9d2b-c73873212cb1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.742968 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-lmrxz\" (UniqueName: \"kubernetes.io/projected/1cbd37a3-3241-4e20-9d2b-c73873212cb1-kube-api-access-lmrxz\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.743131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1cbd37a3-3241-4e20-9d2b-c73873212cb1-config-data\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.744146 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1cbd37a3-3241-4e20-9d2b-c73873212cb1-kolla-config\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.744431 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1cbd37a3-3241-4e20-9d2b-c73873212cb1-config-data\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.750034 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1cbd37a3-3241-4e20-9d2b-c73873212cb1-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.770533 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmrxz\" (UniqueName: \"kubernetes.io/projected/1cbd37a3-3241-4e20-9d2b-c73873212cb1-kube-api-access-lmrxz\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 
15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.772071 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1cbd37a3-3241-4e20-9d2b-c73873212cb1-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1cbd37a3-3241-4e20-9d2b-c73873212cb1\") " pod="openstack/memcached-0" Jan 26 15:04:28 crc kubenswrapper[4823]: I0126 15:04:28.946284 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 26 15:04:30 crc kubenswrapper[4823]: I0126 15:04:30.854253 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:04:30 crc kubenswrapper[4823]: I0126 15:04:30.856439 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:04:30 crc kubenswrapper[4823]: I0126 15:04:30.862864 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:04:30 crc kubenswrapper[4823]: I0126 15:04:30.878969 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-m7txl" Jan 26 15:04:30 crc kubenswrapper[4823]: I0126 15:04:30.982118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcpt\" (UniqueName: \"kubernetes.io/projected/19474244-0d03-4e7f-8a6d-abd64aafaff9-kube-api-access-lpcpt\") pod \"kube-state-metrics-0\" (UID: \"19474244-0d03-4e7f-8a6d-abd64aafaff9\") " pod="openstack/kube-state-metrics-0" Jan 26 15:04:31 crc kubenswrapper[4823]: I0126 15:04:31.084494 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcpt\" (UniqueName: \"kubernetes.io/projected/19474244-0d03-4e7f-8a6d-abd64aafaff9-kube-api-access-lpcpt\") pod \"kube-state-metrics-0\" (UID: \"19474244-0d03-4e7f-8a6d-abd64aafaff9\") " pod="openstack/kube-state-metrics-0" Jan 26 15:04:31 crc 
kubenswrapper[4823]: I0126 15:04:31.106782 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcpt\" (UniqueName: \"kubernetes.io/projected/19474244-0d03-4e7f-8a6d-abd64aafaff9-kube-api-access-lpcpt\") pod \"kube-state-metrics-0\" (UID: \"19474244-0d03-4e7f-8a6d-abd64aafaff9\") " pod="openstack/kube-state-metrics-0" Jan 26 15:04:31 crc kubenswrapper[4823]: I0126 15:04:31.190906 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:04:31 crc kubenswrapper[4823]: I0126 15:04:31.263484 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a82c17e1-38ac-4448-b3ff-b18df77c521b","Type":"ContainerStarted","Data":"9d9a06e749ccfa1a2ed4c1d154370b2c31c7b75b39d5f07a17e57287595998b1"} Jan 26 15:04:31 crc kubenswrapper[4823]: I0126 15:04:31.265180 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43c52fb-3ef3-4d3e-984d-642a9bc09469","Type":"ContainerStarted","Data":"15a8b2464e7d1cd54b8221ecac2a2bbc9c768f330508648a9bf5df254f5739fb"} Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.935709 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s4g2z"] Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.940818 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.946173 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.946497 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s4g2z"] Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.947198 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-kgknv" Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.948978 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.953412 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-twc9z"] Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.955447 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:34 crc kubenswrapper[4823]: I0126 15:04:34.977857 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-twc9z"] Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065160 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxnhz\" (UniqueName: \"kubernetes.io/projected/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-kube-api-access-nxnhz\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065231 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-log-ovn\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065252 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-etc-ovs\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065358 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-ovn-controller-tls-certs\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065489 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-scripts\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-scripts\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065650 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-run\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065747 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ft5s\" (UniqueName: \"kubernetes.io/projected/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-kube-api-access-7ft5s\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065799 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-run-ovn\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-combined-ca-bundle\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065907 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-run\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-lib\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.065991 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-log\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168111 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-combined-ca-bundle\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168182 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-run\") pod 
\"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168215 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-lib\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168245 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-log\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxnhz\" (UniqueName: \"kubernetes.io/projected/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-kube-api-access-nxnhz\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-etc-ovs\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168327 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-log-ovn\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 
crc kubenswrapper[4823]: I0126 15:04:35.168358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-ovn-controller-tls-certs\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168411 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-scripts\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-scripts\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168457 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-run\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168490 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ft5s\" (UniqueName: \"kubernetes.io/projected/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-kube-api-access-7ft5s\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.168508 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-run-ovn\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.169286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-run-ovn\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.170703 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-log-ovn\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.170923 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-log\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.171143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-run\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.171378 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-var-lib\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " 
pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.171510 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-etc-ovs\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.172037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-var-run\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.175353 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-scripts\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.175880 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-scripts\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.177042 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-combined-ca-bundle\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.191445 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxnhz\" 
(UniqueName: \"kubernetes.io/projected/2a39ae8b-f50c-492b-9d4c-308b9b4c87d2-kube-api-access-nxnhz\") pod \"ovn-controller-ovs-twc9z\" (UID: \"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2\") " pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.192961 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ft5s\" (UniqueName: \"kubernetes.io/projected/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-kube-api-access-7ft5s\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.202256 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/366c188c-7e0f-4ac6-8fa6-7a466714d0ea-ovn-controller-tls-certs\") pod \"ovn-controller-s4g2z\" (UID: \"366c188c-7e0f-4ac6-8fa6-7a466714d0ea\") " pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.274482 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-s4g2z" Jan 26 15:04:35 crc kubenswrapper[4823]: I0126 15:04:35.294266 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.982324 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.984529 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.989928 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.990039 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.990184 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.990406 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 26 15:04:36 crc kubenswrapper[4823]: I0126 15:04:36.990855 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-fn5zb" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.012544 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111025 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111130 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111215 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-hljz8\" (UniqueName: \"kubernetes.io/projected/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-kube-api-access-hljz8\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111283 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111320 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111439 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111489 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.111531 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.213725 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.213787 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljz8\" (UniqueName: \"kubernetes.io/projected/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-kube-api-access-hljz8\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.213834 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.213860 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.213881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc 
kubenswrapper[4823]: I0126 15:04:37.213925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.213966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.214016 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.214273 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.214509 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.215547 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.215747 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-config\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.222092 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.222217 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.224189 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.236463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc 
kubenswrapper[4823]: I0126 15:04:37.241953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljz8\" (UniqueName: \"kubernetes.io/projected/a7f3574f-bf6a-45bc-9b87-e519b18bf3dd-kube-api-access-hljz8\") pod \"ovsdbserver-nb-0\" (UID: \"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd\") " pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:37 crc kubenswrapper[4823]: I0126 15:04:37.330970 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.020162 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.021897 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.025538 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.025858 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zpbg4" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.025903 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.029648 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.040421 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131336 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131438 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8fedd21-5444-4125-ac93-dedfe64abef7-config\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131528 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131588 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131801 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tk762\" (UniqueName: \"kubernetes.io/projected/d8fedd21-5444-4125-ac93-dedfe64abef7-kube-api-access-tk762\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " 
pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.131948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8fedd21-5444-4125-ac93-dedfe64abef7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.132093 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8fedd21-5444-4125-ac93-dedfe64abef7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.233800 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.233854 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8fedd21-5444-4125-ac93-dedfe64abef7-config\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.233899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.233932 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.233959 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.233994 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tk762\" (UniqueName: \"kubernetes.io/projected/d8fedd21-5444-4125-ac93-dedfe64abef7-kube-api-access-tk762\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.234021 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8fedd21-5444-4125-ac93-dedfe64abef7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.234058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8fedd21-5444-4125-ac93-dedfe64abef7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.234777 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"d8fedd21-5444-4125-ac93-dedfe64abef7\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.235707 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d8fedd21-5444-4125-ac93-dedfe64abef7-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.236433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d8fedd21-5444-4125-ac93-dedfe64abef7-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.237078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8fedd21-5444-4125-ac93-dedfe64abef7-config\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.238730 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.239000 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.240940 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8fedd21-5444-4125-ac93-dedfe64abef7-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.264289 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.275919 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tk762\" (UniqueName: \"kubernetes.io/projected/d8fedd21-5444-4125-ac93-dedfe64abef7-kube-api-access-tk762\") pod \"ovsdbserver-sb-0\" (UID: \"d8fedd21-5444-4125-ac93-dedfe64abef7\") " pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:38 crc kubenswrapper[4823]: I0126 15:04:38.345158 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 26 15:04:55 crc kubenswrapper[4823]: E0126 15:04:55.182911 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 15:04:55 crc kubenswrapper[4823]: E0126 15:04:55.184211 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbngj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(c43c52fb-3ef3-4d3e-984d-642a9bc09469): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:04:55 crc 
kubenswrapper[4823]: E0126 15:04:55.185486 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" Jan 26 15:04:55 crc kubenswrapper[4823]: E0126 15:04:55.474861 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" Jan 26 15:04:55 crc kubenswrapper[4823]: E0126 15:04:55.542776 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 26 15:04:55 crc kubenswrapper[4823]: E0126 15:04:55.543061 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z85zf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(a82c17e1-38ac-4448-b3ff-b18df77c521b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:04:55 crc 
kubenswrapper[4823]: E0126 15:04:55.545109 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.203045 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.204111 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g7vhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-dkxkd_openstack(84dc7a24-9dd4-4a59-85f2-0283786628d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.205345 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" podUID="84dc7a24-9dd4-4a59-85f2-0283786628d4" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.283528 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.283738 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c5h5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-krn8r_openstack(9a001b80-e9c3-4de1-9a2a-d368adac1975): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.285111 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.351636 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.351892 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sjzrk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-5hjqg_openstack(810199cf-7934-4e66-91b7-293883b42f7b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.353064 4823 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg" podUID="810199cf-7934-4e66-91b7-293883b42f7b" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.425210 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.425473 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f8clc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-zfc4r_openstack(d597d5f2-3e6d-4f78-bca6-d4b11cc244fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.426829 4823 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" podUID="d597d5f2-3e6d-4f78-bca6-d4b11cc244fe" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.488947 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg" podUID="810199cf-7934-4e66-91b7-293883b42f7b" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.489316 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" Jan 26 15:04:56 crc kubenswrapper[4823]: E0126 15:04:56.489403 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.502731 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.583004 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.604309 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.617896 4823 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/openstack-galera-0"] Jan 26 15:04:56 crc kubenswrapper[4823]: W0126 15:04:56.675873 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29094f76_d918_4ee5_8064_52c459a4bdce.slice/crio-777b8e68916f2db0344847786118731645924a79873482e2acc571bea3d0400b WatchSource:0}: Error finding container 777b8e68916f2db0344847786118731645924a79873482e2acc571bea3d0400b: Status 404 returned error can't find the container with id 777b8e68916f2db0344847786118731645924a79873482e2acc571bea3d0400b Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.764605 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-twc9z"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.815188 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s4g2z"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.924342 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-bfnxd"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.927868 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.930216 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.935809 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bfnxd"] Jan 26 15:04:56 crc kubenswrapper[4823]: I0126 15:04:56.939576 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.014384 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f351fa81-8bb3-4b68-9971-f0e5015c60f3-ovn-rundir\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.014453 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f351fa81-8bb3-4b68-9971-f0e5015c60f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.014515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt44c\" (UniqueName: \"kubernetes.io/projected/f351fa81-8bb3-4b68-9971-f0e5015c60f3-kube-api-access-wt44c\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.014590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f351fa81-8bb3-4b68-9971-f0e5015c60f3-combined-ca-bundle\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.014631 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f351fa81-8bb3-4b68-9971-f0e5015c60f3-ovs-rundir\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.014657 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f351fa81-8bb3-4b68-9971-f0e5015c60f3-config\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.070182 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.115710 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8clc\" (UniqueName: \"kubernetes.io/projected/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-kube-api-access-f8clc\") pod \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.115968 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-config\") pod \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-dns-svc\") pod \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\" (UID: \"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe\") " Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116356 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351fa81-8bb3-4b68-9971-f0e5015c60f3-combined-ca-bundle\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116416 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f351fa81-8bb3-4b68-9971-f0e5015c60f3-ovs-rundir\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116478 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f351fa81-8bb3-4b68-9971-f0e5015c60f3-config\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116572 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f351fa81-8bb3-4b68-9971-f0e5015c60f3-ovn-rundir\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f351fa81-8bb3-4b68-9971-f0e5015c60f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.116624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt44c\" (UniqueName: \"kubernetes.io/projected/f351fa81-8bb3-4b68-9971-f0e5015c60f3-kube-api-access-wt44c\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.117480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d597d5f2-3e6d-4f78-bca6-d4b11cc244fe" (UID: "d597d5f2-3e6d-4f78-bca6-d4b11cc244fe"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.117633 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-config" (OuterVolumeSpecName: "config") pod "d597d5f2-3e6d-4f78-bca6-d4b11cc244fe" (UID: "d597d5f2-3e6d-4f78-bca6-d4b11cc244fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.118467 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f351fa81-8bb3-4b68-9971-f0e5015c60f3-ovn-rundir\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.118542 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f351fa81-8bb3-4b68-9971-f0e5015c60f3-ovs-rundir\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.119942 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f351fa81-8bb3-4b68-9971-f0e5015c60f3-config\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.126055 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f351fa81-8bb3-4b68-9971-f0e5015c60f3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 
crc kubenswrapper[4823]: I0126 15:04:57.128277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f351fa81-8bb3-4b68-9971-f0e5015c60f3-combined-ca-bundle\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.129352 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-kube-api-access-f8clc" (OuterVolumeSpecName: "kube-api-access-f8clc") pod "d597d5f2-3e6d-4f78-bca6-d4b11cc244fe" (UID: "d597d5f2-3e6d-4f78-bca6-d4b11cc244fe"). InnerVolumeSpecName "kube-api-access-f8clc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.153169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt44c\" (UniqueName: \"kubernetes.io/projected/f351fa81-8bb3-4b68-9971-f0e5015c60f3-kube-api-access-wt44c\") pod \"ovn-controller-metrics-bfnxd\" (UID: \"f351fa81-8bb3-4b68-9971-f0e5015c60f3\") " pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.182059 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.190933 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5hjqg"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.217505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7vhm\" (UniqueName: \"kubernetes.io/projected/84dc7a24-9dd4-4a59-85f2-0283786628d4-kube-api-access-g7vhm\") pod \"84dc7a24-9dd4-4a59-85f2-0283786628d4\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.217752 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dc7a24-9dd4-4a59-85f2-0283786628d4-config\") pod \"84dc7a24-9dd4-4a59-85f2-0283786628d4\" (UID: \"84dc7a24-9dd4-4a59-85f2-0283786628d4\") " Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.218181 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.218199 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.218212 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8clc\" (UniqueName: \"kubernetes.io/projected/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe-kube-api-access-f8clc\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.221909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84dc7a24-9dd4-4a59-85f2-0283786628d4-config" (OuterVolumeSpecName: "config") pod 
"84dc7a24-9dd4-4a59-85f2-0283786628d4" (UID: "84dc7a24-9dd4-4a59-85f2-0283786628d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.229516 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84dc7a24-9dd4-4a59-85f2-0283786628d4-kube-api-access-g7vhm" (OuterVolumeSpecName: "kube-api-access-g7vhm") pod "84dc7a24-9dd4-4a59-85f2-0283786628d4" (UID: "84dc7a24-9dd4-4a59-85f2-0283786628d4"). InnerVolumeSpecName "kube-api-access-g7vhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.248317 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-h7n5b"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.249969 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.253583 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.268083 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-h7n5b"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.274781 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-bfnxd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.322958 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.323047 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.323212 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-config\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.323242 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plkrv\" (UniqueName: \"kubernetes.io/projected/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-kube-api-access-plkrv\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.323343 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84dc7a24-9dd4-4a59-85f2-0283786628d4-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 
15:04:57.323386 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7vhm\" (UniqueName: \"kubernetes.io/projected/84dc7a24-9dd4-4a59-85f2-0283786628d4-kube-api-access-g7vhm\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.424702 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-config\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.425252 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plkrv\" (UniqueName: \"kubernetes.io/projected/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-kube-api-access-plkrv\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.425302 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.425345 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.426495 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.427206 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-config\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.429269 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.449871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plkrv\" (UniqueName: \"kubernetes.io/projected/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-kube-api-access-plkrv\") pod \"dnsmasq-dns-6bc7876d45-h7n5b\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.489413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1cbd37a3-3241-4e20-9d2b-c73873212cb1","Type":"ContainerStarted","Data":"75df7ccb020536878d69c1206a827e5a46cdbee88753d97beede8c25c32ba33f"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.491341 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dd872d0-a323-4968-9e53-37fefc8adc23","Type":"ContainerStarted","Data":"e4be0f6ab10cebdff732790f6ff6a031fd89b218ab4e6554aead619dcf6dd2e9"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 
15:04:57.492605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d8fedd21-5444-4125-ac93-dedfe64abef7","Type":"ContainerStarted","Data":"46436c60f56db90ebca5749f4f1f4ac5af07c6b2826db54d35fe1b517ccecb03"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.493481 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" event={"ID":"d597d5f2-3e6d-4f78-bca6-d4b11cc244fe","Type":"ContainerDied","Data":"9c1064988b25b0d0d5358d2f6bdb1524bb82413476396b20b7b14d81a1b5b830"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.493604 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-zfc4r" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.500403 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29094f76-d918-4ee5-8064-52c459a4bdce","Type":"ContainerStarted","Data":"777b8e68916f2db0344847786118731645924a79873482e2acc571bea3d0400b"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.502677 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.502688 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-dkxkd" event={"ID":"84dc7a24-9dd4-4a59-85f2-0283786628d4","Type":"ContainerDied","Data":"60f24c82c1fc1b3013aa502b0435de26df6daaf1d65fd4e802b58b07ae94e028"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.504258 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s4g2z" event={"ID":"366c188c-7e0f-4ac6-8fa6-7a466714d0ea","Type":"ContainerStarted","Data":"c72e40511821bf7a2fca9e633abacdee1c95184d9d9e88843fcf257b564cf311"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.505124 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"19474244-0d03-4e7f-8a6d-abd64aafaff9","Type":"ContainerStarted","Data":"902940ade0e4f7212b0380281e6e2742ec87cc5c503ddb3e658e20c5d0439150"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.512574 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twc9z" event={"ID":"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2","Type":"ContainerStarted","Data":"6f11ac1724467d34c92627a881c987b97ec4ef22ad8bb82ceb6a7d75026b0774"} Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.636072 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.686228 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dkxkd"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.712875 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-dkxkd"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.743421 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zfc4r"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.751642 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-zfc4r"] Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.792655 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-bfnxd"] Jan 26 15:04:57 crc kubenswrapper[4823]: W0126 15:04:57.806995 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf351fa81_8bb3_4b68_9971_f0e5015c60f3.slice/crio-92ade00c697ea7d665e4377587b7ddf9137f91e17ee00690e615038a16456950 WatchSource:0}: Error finding container 92ade00c697ea7d665e4377587b7ddf9137f91e17ee00690e615038a16456950: Status 404 returned error can't find the container with id 92ade00c697ea7d665e4377587b7ddf9137f91e17ee00690e615038a16456950 Jan 26 15:04:57 crc kubenswrapper[4823]: I0126 15:04:57.898871 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.135149 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.151744 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-dns-svc\") pod \"810199cf-7934-4e66-91b7-293883b42f7b\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.151969 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-config\") pod \"810199cf-7934-4e66-91b7-293883b42f7b\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.152127 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjzrk\" (UniqueName: \"kubernetes.io/projected/810199cf-7934-4e66-91b7-293883b42f7b-kube-api-access-sjzrk\") pod \"810199cf-7934-4e66-91b7-293883b42f7b\" (UID: \"810199cf-7934-4e66-91b7-293883b42f7b\") " Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.155134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "810199cf-7934-4e66-91b7-293883b42f7b" (UID: "810199cf-7934-4e66-91b7-293883b42f7b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.155539 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-config" (OuterVolumeSpecName: "config") pod "810199cf-7934-4e66-91b7-293883b42f7b" (UID: "810199cf-7934-4e66-91b7-293883b42f7b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.188707 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/810199cf-7934-4e66-91b7-293883b42f7b-kube-api-access-sjzrk" (OuterVolumeSpecName: "kube-api-access-sjzrk") pod "810199cf-7934-4e66-91b7-293883b42f7b" (UID: "810199cf-7934-4e66-91b7-293883b42f7b"). InnerVolumeSpecName "kube-api-access-sjzrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.255649 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.255695 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjzrk\" (UniqueName: \"kubernetes.io/projected/810199cf-7934-4e66-91b7-293883b42f7b-kube-api-access-sjzrk\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.255707 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/810199cf-7934-4e66-91b7-293883b42f7b-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.348739 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-h7n5b"] Jan 26 15:04:58 crc kubenswrapper[4823]: W0126 15:04:58.436375 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6586c9b3_8e2e_4a3c_9bc9_cd71e7f57cc6.slice/crio-3ce2481e03eae3e1ecba7cd41931ee132297465093abcfaaa9731736bfa1490f WatchSource:0}: Error finding container 3ce2481e03eae3e1ecba7cd41931ee132297465093abcfaaa9731736bfa1490f: Status 404 returned error can't find the container with id 
3ce2481e03eae3e1ecba7cd41931ee132297465093abcfaaa9731736bfa1490f Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.528346 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd","Type":"ContainerStarted","Data":"0df1589a197a191752545d42fa95c325c6763367691f175f6873b860d8d792df"} Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.530338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" event={"ID":"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6","Type":"ContainerStarted","Data":"3ce2481e03eae3e1ecba7cd41931ee132297465093abcfaaa9731736bfa1490f"} Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.535285 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg" event={"ID":"810199cf-7934-4e66-91b7-293883b42f7b","Type":"ContainerDied","Data":"afde8a3bf63341fff2a6d93f38199443b5486b3306a339415bc0fe0b5976410f"} Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.535338 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-5hjqg" Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.537582 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bfnxd" event={"ID":"f351fa81-8bb3-4b68-9971-f0e5015c60f3","Type":"ContainerStarted","Data":"92ade00c697ea7d665e4377587b7ddf9137f91e17ee00690e615038a16456950"} Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.617756 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5hjqg"] Jan 26 15:04:58 crc kubenswrapper[4823]: I0126 15:04:58.631854 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-5hjqg"] Jan 26 15:04:59 crc kubenswrapper[4823]: I0126 15:04:59.578184 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="810199cf-7934-4e66-91b7-293883b42f7b" path="/var/lib/kubelet/pods/810199cf-7934-4e66-91b7-293883b42f7b/volumes" Jan 26 15:04:59 crc kubenswrapper[4823]: I0126 15:04:59.578702 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84dc7a24-9dd4-4a59-85f2-0283786628d4" path="/var/lib/kubelet/pods/84dc7a24-9dd4-4a59-85f2-0283786628d4/volumes" Jan 26 15:04:59 crc kubenswrapper[4823]: I0126 15:04:59.579172 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d597d5f2-3e6d-4f78-bca6-d4b11cc244fe" path="/var/lib/kubelet/pods/d597d5f2-3e6d-4f78-bca6-d4b11cc244fe/volumes" Jan 26 15:05:11 crc kubenswrapper[4823]: E0126 15:05:11.891082 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Jan 26 15:05:11 crc kubenswrapper[4823]: E0126 15:05:11.892256 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:ndfhfh5f4h8dh5h679h665hb6h55ch5bch695h56bh547h69h74h594h68h5c8h6fh688h5cfh6ch569h54dh9dhc6hd7h67bh54fh664h7dh5b7q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wt44c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termin
ation-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-bfnxd_openstack(f351fa81-8bb3-4b68-9971-f0e5015c60f3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:05:11 crc kubenswrapper[4823]: E0126 15:05:11.894241 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-bfnxd" podUID="f351fa81-8bb3-4b68-9971-f0e5015c60f3" Jan 26 15:05:12 crc kubenswrapper[4823]: E0126 15:05:12.519511 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 15:05:12 crc kubenswrapper[4823]: E0126 15:05:12.520087 4823 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 26 15:05:12 crc kubenswrapper[4823]: E0126 15:05:12.520303 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lpcpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
kube-state-metrics-0_openstack(19474244-0d03-4e7f-8a6d-abd64aafaff9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:05:12 crc kubenswrapper[4823]: E0126 15:05:12.521495 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" Jan 26 15:05:12 crc kubenswrapper[4823]: E0126 15:05:12.685946 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" Jan 26 15:05:12 crc kubenswrapper[4823]: E0126 15:05:12.686697 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-bfnxd" podUID="f351fa81-8bb3-4b68-9971-f0e5015c60f3" Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.675051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dd872d0-a323-4968-9e53-37fefc8adc23","Type":"ContainerStarted","Data":"42d9de8c97ae4dd67f066da40edd06d8912760767c01d14bd2f716c439744205"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.679978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"29094f76-d918-4ee5-8064-52c459a4bdce","Type":"ContainerStarted","Data":"1ec32e8b9ec071d1f11188723342a4584a55ac6501161cfbf8b135f436f84e7a"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.689994 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s4g2z" event={"ID":"366c188c-7e0f-4ac6-8fa6-7a466714d0ea","Type":"ContainerStarted","Data":"6949ec363942c360783789ac425cd49e063cff22041d634ceedebeae30b2ed6c"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.690185 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-s4g2z" Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.692648 4823 generic.go:334] "Generic (PLEG): container finished" podID="2a39ae8b-f50c-492b-9d4c-308b9b4c87d2" containerID="a19995c0a8d690b7a740fd15d8cb465e1af46069892c61e4f6ab95672fdc18f7" exitCode=0 Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.692737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twc9z" event={"ID":"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2","Type":"ContainerDied","Data":"a19995c0a8d690b7a740fd15d8cb465e1af46069892c61e4f6ab95672fdc18f7"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.698305 4823 generic.go:334] "Generic (PLEG): container finished" podID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerID="e31f4408c21b23bb3d74d27142e1f888fe1d3402e048417622cc0d2d8c88e547" exitCode=0 Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.698444 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" event={"ID":"9a001b80-e9c3-4de1-9a2a-d368adac1975","Type":"ContainerDied","Data":"e31f4408c21b23bb3d74d27142e1f888fe1d3402e048417622cc0d2d8c88e547"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.702720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"d8fedd21-5444-4125-ac93-dedfe64abef7","Type":"ContainerStarted","Data":"efc32c0a509c49b1ce9be695488099d09ce0414b3c26f241e0d503d388ed9e50"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.711533 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd","Type":"ContainerStarted","Data":"8dda28d888f09110892ff42b3c7828bf7ff33539034a350221dcfb7e668fd34a"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.719827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1cbd37a3-3241-4e20-9d2b-c73873212cb1","Type":"ContainerStarted","Data":"10202a676fc474b4fd6e2cc15169ebe4523582f8000dfb55a0569e239279ba2f"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.720072 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.729698 4823 generic.go:334] "Generic (PLEG): container finished" podID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerID="21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e" exitCode=0 Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.729773 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" event={"ID":"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6","Type":"ContainerDied","Data":"21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e"} Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.816544 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-s4g2z" podStartSLOduration=24.143854478 podStartE2EDuration="39.816517194s" podCreationTimestamp="2026-01-26 15:04:34 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.819121468 +0000 UTC m=+1093.504584573" lastFinishedPulling="2026-01-26 15:05:12.491784184 +0000 UTC m=+1109.177247289" observedRunningTime="2026-01-26 15:05:13.814269483 +0000 UTC 
m=+1110.499732638" watchObservedRunningTime="2026-01-26 15:05:13.816517194 +0000 UTC m=+1110.501980309" Jan 26 15:05:13 crc kubenswrapper[4823]: I0126 15:05:13.895700 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=35.771671649 podStartE2EDuration="45.895672025s" podCreationTimestamp="2026-01-26 15:04:28 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.583977762 +0000 UTC m=+1093.269440867" lastFinishedPulling="2026-01-26 15:05:06.707978138 +0000 UTC m=+1103.393441243" observedRunningTime="2026-01-26 15:05:13.859719273 +0000 UTC m=+1110.545182388" watchObservedRunningTime="2026-01-26 15:05:13.895672025 +0000 UTC m=+1110.581135130" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.743741 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twc9z" event={"ID":"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2","Type":"ContainerStarted","Data":"0c41d5773bcb390ed54a20af933df5a2cfe71c31475b8410eef390371247e7f4"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.745575 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-twc9z" event={"ID":"2a39ae8b-f50c-492b-9d4c-308b9b4c87d2","Type":"ContainerStarted","Data":"f7f52c17a93523df1e56e829d3385c3af43343d26ba2c5ab6daf4dfe2c2094f4"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.745620 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.747872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" event={"ID":"9a001b80-e9c3-4de1-9a2a-d368adac1975","Type":"ContainerStarted","Data":"4f16df8a569a32469f934fefe3535d54d6589c899c826fd11b44ad4046830ede"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.748106 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-666b6646f7-krn8r" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.750886 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"d8fedd21-5444-4125-ac93-dedfe64abef7","Type":"ContainerStarted","Data":"d12a43b4bb31522a6e3ddc382b346e5ea8275083cc182eb52b60e55aa6d01e3f"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.753310 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a7f3574f-bf6a-45bc-9b87-e519b18bf3dd","Type":"ContainerStarted","Data":"6407af0a55b49b7ab073318405050a2517f3a153136ca66e144649d1ec93a19a"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.755405 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a82c17e1-38ac-4448-b3ff-b18df77c521b","Type":"ContainerStarted","Data":"4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.759407 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" event={"ID":"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6","Type":"ContainerStarted","Data":"6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.760079 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.763081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43c52fb-3ef3-4d3e-984d-642a9bc09469","Type":"ContainerStarted","Data":"d324e4498e25c790364c529d8ff7c5a42be04ccc727f54417de05094a26b7b1f"} Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.808626 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-twc9z" podStartSLOduration=26.017253555 
podStartE2EDuration="40.808601225s" podCreationTimestamp="2026-01-26 15:04:34 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.798633088 +0000 UTC m=+1093.484096193" lastFinishedPulling="2026-01-26 15:05:11.589980758 +0000 UTC m=+1108.275443863" observedRunningTime="2026-01-26 15:05:14.771274076 +0000 UTC m=+1111.456737191" watchObservedRunningTime="2026-01-26 15:05:14.808601225 +0000 UTC m=+1111.494064330" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.857647 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.669929851 podStartE2EDuration="39.857620273s" podCreationTimestamp="2026-01-26 15:04:35 +0000 UTC" firstStartedPulling="2026-01-26 15:04:57.933408907 +0000 UTC m=+1094.618872022" lastFinishedPulling="2026-01-26 15:05:14.121099339 +0000 UTC m=+1110.806562444" observedRunningTime="2026-01-26 15:05:14.852410401 +0000 UTC m=+1111.537873546" watchObservedRunningTime="2026-01-26 15:05:14.857620273 +0000 UTC m=+1111.543083378" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.893860 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" podStartSLOduration=-9223371985.960945 podStartE2EDuration="50.893831112s" podCreationTimestamp="2026-01-26 15:04:24 +0000 UTC" firstStartedPulling="2026-01-26 15:04:25.161020196 +0000 UTC m=+1061.846483301" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:05:14.876323803 +0000 UTC m=+1111.561786908" watchObservedRunningTime="2026-01-26 15:05:14.893831112 +0000 UTC m=+1111.579294217" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.908008 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" podStartSLOduration=4.485724011 podStartE2EDuration="17.907986598s" podCreationTimestamp="2026-01-26 15:04:57 +0000 UTC" firstStartedPulling="2026-01-26 15:04:58.440712103 +0000 UTC 
m=+1095.126175208" lastFinishedPulling="2026-01-26 15:05:11.86297469 +0000 UTC m=+1108.548437795" observedRunningTime="2026-01-26 15:05:14.904864142 +0000 UTC m=+1111.590327257" watchObservedRunningTime="2026-01-26 15:05:14.907986598 +0000 UTC m=+1111.593449703" Jan 26 15:05:14 crc kubenswrapper[4823]: I0126 15:05:14.932893 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=21.727234852 podStartE2EDuration="38.932770404s" podCreationTimestamp="2026-01-26 15:04:36 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.972812589 +0000 UTC m=+1093.658275694" lastFinishedPulling="2026-01-26 15:05:14.178348131 +0000 UTC m=+1110.863811246" observedRunningTime="2026-01-26 15:05:14.928398925 +0000 UTC m=+1111.613862030" watchObservedRunningTime="2026-01-26 15:05:14.932770404 +0000 UTC m=+1111.618233519" Jan 26 15:05:15 crc kubenswrapper[4823]: I0126 15:05:15.295534 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:05:16 crc kubenswrapper[4823]: I0126 15:05:16.332725 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 26 15:05:16 crc kubenswrapper[4823]: I0126 15:05:16.379208 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 26 15:05:16 crc kubenswrapper[4823]: I0126 15:05:16.779414 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 26 15:05:17 crc kubenswrapper[4823]: I0126 15:05:17.345781 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 26 15:05:17 crc kubenswrapper[4823]: I0126 15:05:17.413113 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 26 15:05:17 crc kubenswrapper[4823]: I0126 15:05:17.789906 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 26 15:05:18 crc kubenswrapper[4823]: I0126 15:05:18.398506 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 26 15:05:18 crc kubenswrapper[4823]: I0126 15:05:18.948652 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 26 15:05:19 crc kubenswrapper[4823]: I0126 15:05:19.514641 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" Jan 26 15:05:19 crc kubenswrapper[4823]: I0126 15:05:19.816769 4823 generic.go:334] "Generic (PLEG): container finished" podID="29094f76-d918-4ee5-8064-52c459a4bdce" containerID="1ec32e8b9ec071d1f11188723342a4584a55ac6501161cfbf8b135f436f84e7a" exitCode=0 Jan 26 15:05:19 crc kubenswrapper[4823]: I0126 15:05:19.816876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29094f76-d918-4ee5-8064-52c459a4bdce","Type":"ContainerDied","Data":"1ec32e8b9ec071d1f11188723342a4584a55ac6501161cfbf8b135f436f84e7a"} Jan 26 15:05:19 crc kubenswrapper[4823]: I0126 15:05:19.822096 4823 generic.go:334] "Generic (PLEG): container finished" podID="7dd872d0-a323-4968-9e53-37fefc8adc23" containerID="42d9de8c97ae4dd67f066da40edd06d8912760767c01d14bd2f716c439744205" exitCode=0 Jan 26 15:05:19 crc kubenswrapper[4823]: I0126 15:05:19.822232 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7dd872d0-a323-4968-9e53-37fefc8adc23","Type":"ContainerDied","Data":"42d9de8c97ae4dd67f066da40edd06d8912760767c01d14bd2f716c439744205"} Jan 26 15:05:20 crc kubenswrapper[4823]: I0126 15:05:20.834688 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"7dd872d0-a323-4968-9e53-37fefc8adc23","Type":"ContainerStarted","Data":"da037abcaf100ed893f231b4e0ac37d4a8015732db31771f2e59689af49b4055"} Jan 26 15:05:20 crc kubenswrapper[4823]: I0126 15:05:20.837230 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"29094f76-d918-4ee5-8064-52c459a4bdce","Type":"ContainerStarted","Data":"b511a25a57a84bbf22f3ed782b13d91341ae0a57ca78c66ea13d6e4e2f937c8b"} Jan 26 15:05:20 crc kubenswrapper[4823]: I0126 15:05:20.864329 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=38.03661081 podStartE2EDuration="53.864305824s" podCreationTimestamp="2026-01-26 15:04:27 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.660299387 +0000 UTC m=+1093.345762492" lastFinishedPulling="2026-01-26 15:05:12.487994401 +0000 UTC m=+1109.173457506" observedRunningTime="2026-01-26 15:05:20.856065359 +0000 UTC m=+1117.541528464" watchObservedRunningTime="2026-01-26 15:05:20.864305824 +0000 UTC m=+1117.549768919" Jan 26 15:05:20 crc kubenswrapper[4823]: I0126 15:05:20.886909 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=40.06220731 podStartE2EDuration="55.88688034s" podCreationTimestamp="2026-01-26 15:04:25 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.692161378 +0000 UTC m=+1093.377624483" lastFinishedPulling="2026-01-26 15:05:12.516834408 +0000 UTC m=+1109.202297513" observedRunningTime="2026-01-26 15:05:20.881181815 +0000 UTC m=+1117.566644960" watchObservedRunningTime="2026-01-26 15:05:20.88688034 +0000 UTC m=+1117.572343455" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.392206 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.638665 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.715060 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krn8r"] Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.730143 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerName="dnsmasq-dns" containerID="cri-o://4f16df8a569a32469f934fefe3535d54d6589c899c826fd11b44ad4046830ede" gracePeriod=10 Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.747624 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.760787 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.788401 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.789003 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-pxftb" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.793768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.789716 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.789756 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.892985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103958d3-5a75-408a-bcc3-02788016b72e-config\") pod \"ovn-northd-0\" 
(UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.893035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.893059 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.893082 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/103958d3-5a75-408a-bcc3-02788016b72e-scripts\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.893103 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/103958d3-5a75-408a-bcc3-02788016b72e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.893129 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc 
kubenswrapper[4823]: I0126 15:05:22.893152 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8xlc\" (UniqueName: \"kubernetes.io/projected/103958d3-5a75-408a-bcc3-02788016b72e-kube-api-access-g8xlc\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.901221 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-hv5xq"] Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.921015 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.925507 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hv5xq"] Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.925930 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.994598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/103958d3-5a75-408a-bcc3-02788016b72e-scripts\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.995229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/103958d3-5a75-408a-bcc3-02788016b72e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.995282 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.995308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8xlc\" (UniqueName: \"kubernetes.io/projected/103958d3-5a75-408a-bcc3-02788016b72e-kube-api-access-g8xlc\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.995430 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103958d3-5a75-408a-bcc3-02788016b72e-config\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.995456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.995473 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc kubenswrapper[4823]: I0126 15:05:22.997057 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/103958d3-5a75-408a-bcc3-02788016b72e-scripts\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:22 crc 
kubenswrapper[4823]: I0126 15:05:22.998742 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/103958d3-5a75-408a-bcc3-02788016b72e-config\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.007870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/103958d3-5a75-408a-bcc3-02788016b72e-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.018177 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.026818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.026817 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103958d3-5a75-408a-bcc3-02788016b72e-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.040473 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8xlc\" (UniqueName: \"kubernetes.io/projected/103958d3-5a75-408a-bcc3-02788016b72e-kube-api-access-g8xlc\") pod 
\"ovn-northd-0\" (UID: \"103958d3-5a75-408a-bcc3-02788016b72e\") " pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.101079 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2tff\" (UniqueName: \"kubernetes.io/projected/f90430a4-242c-43dd-9c41-11e67170985a-kube-api-access-p2tff\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.101164 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-dns-svc\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.101436 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.101465 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.101605 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-config\") pod 
\"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.122910 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.204103 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.204155 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.204187 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-config\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.204269 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2tff\" (UniqueName: \"kubernetes.io/projected/f90430a4-242c-43dd-9c41-11e67170985a-kube-api-access-p2tff\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.204288 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-dns-svc\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.205497 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-dns-svc\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.205497 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.206201 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.207150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-config\") pod \"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.242399 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2tff\" (UniqueName: \"kubernetes.io/projected/f90430a4-242c-43dd-9c41-11e67170985a-kube-api-access-p2tff\") pod 
\"dnsmasq-dns-8554648995-hv5xq\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.273884 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.629018 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 26 15:05:23 crc kubenswrapper[4823]: W0126 15:05:23.633330 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod103958d3_5a75_408a_bcc3_02788016b72e.slice/crio-9dbc2eece85ecf492ea9eadd658d2f3a08f836905e9822da498325f4404277db WatchSource:0}: Error finding container 9dbc2eece85ecf492ea9eadd658d2f3a08f836905e9822da498325f4404277db: Status 404 returned error can't find the container with id 9dbc2eece85ecf492ea9eadd658d2f3a08f836905e9822da498325f4404277db Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.805250 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hv5xq"] Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.921670 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"103958d3-5a75-408a-bcc3-02788016b72e","Type":"ContainerStarted","Data":"9dbc2eece85ecf492ea9eadd658d2f3a08f836905e9822da498325f4404277db"} Jan 26 15:05:23 crc kubenswrapper[4823]: I0126 15:05:23.922875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hv5xq" event={"ID":"f90430a4-242c-43dd-9c41-11e67170985a","Type":"ContainerStarted","Data":"8239b38be1064618b029a12aebb1831fb6089b8cee688bde25a995f1d8cdf49f"} Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.514087 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: connect: connection refused" Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.949462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-bfnxd" event={"ID":"f351fa81-8bb3-4b68-9971-f0e5015c60f3","Type":"ContainerStarted","Data":"af22a7931845ee2a0cb3463a1200675e25377e6bde3b020f6034cf4224cbb47f"} Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.953016 4823 generic.go:334] "Generic (PLEG): container finished" podID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerID="4f16df8a569a32469f934fefe3535d54d6589c899c826fd11b44ad4046830ede" exitCode=0 Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.953098 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" event={"ID":"9a001b80-e9c3-4de1-9a2a-d368adac1975","Type":"ContainerDied","Data":"4f16df8a569a32469f934fefe3535d54d6589c899c826fd11b44ad4046830ede"} Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.954760 4823 generic.go:334] "Generic (PLEG): container finished" podID="f90430a4-242c-43dd-9c41-11e67170985a" containerID="33f4543b4a8e81fd49d7e7ef8853a8d8cb9d51e063b856739767dc24ef270b36" exitCode=0 Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.954793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hv5xq" event={"ID":"f90430a4-242c-43dd-9c41-11e67170985a","Type":"ContainerDied","Data":"33f4543b4a8e81fd49d7e7ef8853a8d8cb9d51e063b856739767dc24ef270b36"} Jan 26 15:05:24 crc kubenswrapper[4823]: I0126 15:05:24.984033 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-bfnxd" podStartSLOduration=-9223372007.870777 podStartE2EDuration="28.983998056s" podCreationTimestamp="2026-01-26 15:04:56 +0000 UTC" firstStartedPulling="2026-01-26 15:04:57.809275854 +0000 UTC m=+1094.494738959" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-01-26 15:05:24.969689156 +0000 UTC m=+1121.655152281" watchObservedRunningTime="2026-01-26 15:05:24.983998056 +0000 UTC m=+1121.669461161" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.569651 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.665635 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-dns-svc\") pod \"9a001b80-e9c3-4de1-9a2a-d368adac1975\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.665725 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-config\") pod \"9a001b80-e9c3-4de1-9a2a-d368adac1975\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.665766 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c5h5\" (UniqueName: \"kubernetes.io/projected/9a001b80-e9c3-4de1-9a2a-d368adac1975-kube-api-access-2c5h5\") pod \"9a001b80-e9c3-4de1-9a2a-d368adac1975\" (UID: \"9a001b80-e9c3-4de1-9a2a-d368adac1975\") " Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.679673 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a001b80-e9c3-4de1-9a2a-d368adac1975-kube-api-access-2c5h5" (OuterVolumeSpecName: "kube-api-access-2c5h5") pod "9a001b80-e9c3-4de1-9a2a-d368adac1975" (UID: "9a001b80-e9c3-4de1-9a2a-d368adac1975"). InnerVolumeSpecName "kube-api-access-2c5h5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.723131 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9a001b80-e9c3-4de1-9a2a-d368adac1975" (UID: "9a001b80-e9c3-4de1-9a2a-d368adac1975"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.724569 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-config" (OuterVolumeSpecName: "config") pod "9a001b80-e9c3-4de1-9a2a-d368adac1975" (UID: "9a001b80-e9c3-4de1-9a2a-d368adac1975"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.768591 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.768644 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a001b80-e9c3-4de1-9a2a-d368adac1975-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.768659 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c5h5\" (UniqueName: \"kubernetes.io/projected/9a001b80-e9c3-4de1-9a2a-d368adac1975-kube-api-access-2c5h5\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.964718 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"103958d3-5a75-408a-bcc3-02788016b72e","Type":"ContainerStarted","Data":"6164ca3011f05a28e7117035741ec4e502da546ddf4d0d573e6567bd5af03d59"} Jan 26 15:05:25 
crc kubenswrapper[4823]: I0126 15:05:25.964793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"103958d3-5a75-408a-bcc3-02788016b72e","Type":"ContainerStarted","Data":"70ced75965e13dd99a11627564ec6e50c0fea73d48dfa09e614c275f22242bf4"} Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.964857 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.967144 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" event={"ID":"9a001b80-e9c3-4de1-9a2a-d368adac1975","Type":"ContainerDied","Data":"ccf4d9043d06f4e053b61635a8c9af66a104f14b9c3d986bae4c1957ca3aa6be"} Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.967198 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-krn8r" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.967214 4823 scope.go:117] "RemoveContainer" containerID="4f16df8a569a32469f934fefe3535d54d6589c899c826fd11b44ad4046830ede" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.968956 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hv5xq" event={"ID":"f90430a4-242c-43dd-9c41-11e67170985a","Type":"ContainerStarted","Data":"3fd40d56b0675043d40ef452c1e772849efead7fd3a6d7cdbf8fe9cb209af31c"} Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.969265 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:25 crc kubenswrapper[4823]: I0126 15:05:25.983932 4823 scope.go:117] "RemoveContainer" containerID="e31f4408c21b23bb3d74d27142e1f888fe1d3402e048417622cc0d2d8c88e547" Jan 26 15:05:26 crc kubenswrapper[4823]: I0126 15:05:26.001274 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.130880759 
podStartE2EDuration="4.001250263s" podCreationTimestamp="2026-01-26 15:05:22 +0000 UTC" firstStartedPulling="2026-01-26 15:05:23.637837151 +0000 UTC m=+1120.323300256" lastFinishedPulling="2026-01-26 15:05:25.508206655 +0000 UTC m=+1122.193669760" observedRunningTime="2026-01-26 15:05:25.989991556 +0000 UTC m=+1122.675454671" watchObservedRunningTime="2026-01-26 15:05:26.001250263 +0000 UTC m=+1122.686713368" Jan 26 15:05:26 crc kubenswrapper[4823]: I0126 15:05:26.021866 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krn8r"] Jan 26 15:05:26 crc kubenswrapper[4823]: I0126 15:05:26.027725 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-krn8r"] Jan 26 15:05:26 crc kubenswrapper[4823]: I0126 15:05:26.047570 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-hv5xq" podStartSLOduration=4.047545567 podStartE2EDuration="4.047545567s" podCreationTimestamp="2026-01-26 15:05:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:05:26.042277654 +0000 UTC m=+1122.727740759" watchObservedRunningTime="2026-01-26 15:05:26.047545567 +0000 UTC m=+1122.733008672" Jan 26 15:05:27 crc kubenswrapper[4823]: I0126 15:05:27.232566 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 26 15:05:27 crc kubenswrapper[4823]: I0126 15:05:27.232650 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 26 15:05:27 crc kubenswrapper[4823]: I0126 15:05:27.572632 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" path="/var/lib/kubelet/pods/9a001b80-e9c3-4de1-9a2a-d368adac1975/volumes" Jan 26 15:05:27 crc kubenswrapper[4823]: I0126 15:05:27.993422 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"19474244-0d03-4e7f-8a6d-abd64aafaff9","Type":"ContainerStarted","Data":"6e82e9deb99af2b3b870a2ed2a53db407735ae46e99c77c22fa41e3ca8b9f407"} Jan 26 15:05:27 crc kubenswrapper[4823]: I0126 15:05:27.993772 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 15:05:28 crc kubenswrapper[4823]: I0126 15:05:28.025481 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=27.707659357 podStartE2EDuration="58.025429937s" podCreationTimestamp="2026-01-26 15:04:30 +0000 UTC" firstStartedPulling="2026-01-26 15:04:56.692200369 +0000 UTC m=+1093.377663474" lastFinishedPulling="2026-01-26 15:05:27.009970929 +0000 UTC m=+1123.695434054" observedRunningTime="2026-01-26 15:05:28.012136013 +0000 UTC m=+1124.697599158" watchObservedRunningTime="2026-01-26 15:05:28.025429937 +0000 UTC m=+1124.710893082" Jan 26 15:05:28 crc kubenswrapper[4823]: I0126 15:05:28.538107 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 26 15:05:28 crc kubenswrapper[4823]: I0126 15:05:28.542171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 26 15:05:28 crc kubenswrapper[4823]: I0126 15:05:28.631963 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 26 15:05:29 crc kubenswrapper[4823]: I0126 15:05:29.107096 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 26 15:05:29 crc kubenswrapper[4823]: I0126 15:05:29.663490 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 26 15:05:29 crc kubenswrapper[4823]: I0126 15:05:29.737312 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 26 15:05:33 crc kubenswrapper[4823]: I0126 15:05:33.277563 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:05:33 crc kubenswrapper[4823]: I0126 15:05:33.353439 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-h7n5b"] Jan 26 15:05:33 crc kubenswrapper[4823]: I0126 15:05:33.355847 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerName="dnsmasq-dns" containerID="cri-o://6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598" gracePeriod=10 Jan 26 15:05:34 crc kubenswrapper[4823]: I0126 15:05:34.971689 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.052660 4823 generic.go:334] "Generic (PLEG): container finished" podID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerID="6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598" exitCode=0 Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.052719 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.052729 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" event={"ID":"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6","Type":"ContainerDied","Data":"6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598"} Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.052778 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-h7n5b" event={"ID":"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6","Type":"ContainerDied","Data":"3ce2481e03eae3e1ecba7cd41931ee132297465093abcfaaa9731736bfa1490f"} Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.052800 4823 scope.go:117] "RemoveContainer" containerID="6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.075780 4823 scope.go:117] "RemoveContainer" containerID="21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.081608 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-ovsdbserver-sb\") pod \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.081716 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-config\") pod \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.081913 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-dns-svc\") pod 
\"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.081970 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plkrv\" (UniqueName: \"kubernetes.io/projected/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-kube-api-access-plkrv\") pod \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\" (UID: \"6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6\") " Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.090954 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-kube-api-access-plkrv" (OuterVolumeSpecName: "kube-api-access-plkrv") pod "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" (UID: "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6"). InnerVolumeSpecName "kube-api-access-plkrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.101679 4823 scope.go:117] "RemoveContainer" containerID="6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598" Jan 26 15:05:35 crc kubenswrapper[4823]: E0126 15:05:35.103691 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598\": container with ID starting with 6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598 not found: ID does not exist" containerID="6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.103778 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598"} err="failed to get container status \"6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598\": rpc error: code = NotFound desc = could not find container 
\"6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598\": container with ID starting with 6b4dc3366f4c02a757d38904e8149367bdc391db5a28228677917be5396c1598 not found: ID does not exist" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.103822 4823 scope.go:117] "RemoveContainer" containerID="21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e" Jan 26 15:05:35 crc kubenswrapper[4823]: E0126 15:05:35.104492 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e\": container with ID starting with 21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e not found: ID does not exist" containerID="21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.104560 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e"} err="failed to get container status \"21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e\": rpc error: code = NotFound desc = could not find container \"21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e\": container with ID starting with 21754a67f5371007f59875a0e599a1170420bd2e4fafa4b77e7316635203121e not found: ID does not exist" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.130959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" (UID: "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.138628 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-config" (OuterVolumeSpecName: "config") pod "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" (UID: "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.140934 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" (UID: "6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.184831 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.184887 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plkrv\" (UniqueName: \"kubernetes.io/projected/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-kube-api-access-plkrv\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.184905 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.184917 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:35 crc 
kubenswrapper[4823]: I0126 15:05:35.394732 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-h7n5b"] Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.401586 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-h7n5b"] Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.575576 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" path="/var/lib/kubelet/pods/6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6/volumes" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.941459 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-84q84"] Jan 26 15:05:35 crc kubenswrapper[4823]: E0126 15:05:35.941902 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerName="init" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.941923 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerName="init" Jan 26 15:05:35 crc kubenswrapper[4823]: E0126 15:05:35.941955 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerName="dnsmasq-dns" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.941962 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerName="dnsmasq-dns" Jan 26 15:05:35 crc kubenswrapper[4823]: E0126 15:05:35.941990 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerName="dnsmasq-dns" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.941999 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerName="dnsmasq-dns" Jan 26 15:05:35 crc kubenswrapper[4823]: E0126 15:05:35.942019 4823 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerName="init" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.942025 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerName="init" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.942246 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6586c9b3-8e2e-4a3c-9bc9-cd71e7f57cc6" containerName="dnsmasq-dns" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.942266 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a001b80-e9c3-4de1-9a2a-d368adac1975" containerName="dnsmasq-dns" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.943141 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-84q84" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.946326 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 26 15:05:35 crc kubenswrapper[4823]: I0126 15:05:35.954083 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-84q84"] Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.103483 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c541d2f3-f29a-4151-9fb4-031b967b8969-operator-scripts\") pod \"root-account-create-update-84q84\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.103545 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwpc\" (UniqueName: \"kubernetes.io/projected/c541d2f3-f29a-4151-9fb4-031b967b8969-kube-api-access-jmwpc\") pod \"root-account-create-update-84q84\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " 
pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.205728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c541d2f3-f29a-4151-9fb4-031b967b8969-operator-scripts\") pod \"root-account-create-update-84q84\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.205805 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwpc\" (UniqueName: \"kubernetes.io/projected/c541d2f3-f29a-4151-9fb4-031b967b8969-kube-api-access-jmwpc\") pod \"root-account-create-update-84q84\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.206576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c541d2f3-f29a-4151-9fb4-031b967b8969-operator-scripts\") pod \"root-account-create-update-84q84\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.229863 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwpc\" (UniqueName: \"kubernetes.io/projected/c541d2f3-f29a-4151-9fb4-031b967b8969-kube-api-access-jmwpc\") pod \"root-account-create-update-84q84\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.266236 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-84q84" Jan 26 15:05:36 crc kubenswrapper[4823]: I0126 15:05:36.742935 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-84q84"] Jan 26 15:05:37 crc kubenswrapper[4823]: I0126 15:05:37.076540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-84q84" event={"ID":"c541d2f3-f29a-4151-9fb4-031b967b8969","Type":"ContainerStarted","Data":"4761f26d82e6fc4c9c9ced8d686425fa4265970e598f982b1fe5d3e9d152304a"} Jan 26 15:05:37 crc kubenswrapper[4823]: I0126 15:05:37.077021 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-84q84" event={"ID":"c541d2f3-f29a-4151-9fb4-031b967b8969","Type":"ContainerStarted","Data":"1ea9e5e48f7afea4b87e077409c99b346d3aae165df4668059dfada928d39f63"} Jan 26 15:05:37 crc kubenswrapper[4823]: I0126 15:05:37.095009 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-84q84" podStartSLOduration=2.094981353 podStartE2EDuration="2.094981353s" podCreationTimestamp="2026-01-26 15:05:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:05:37.0923013 +0000 UTC m=+1133.777764435" watchObservedRunningTime="2026-01-26 15:05:37.094981353 +0000 UTC m=+1133.780444458" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.453885 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-z4kl4"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.455740 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.478600 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z4kl4"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.554625 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5fed33-52f8-4a1a-9096-794711814cf5-operator-scripts\") pod \"keystone-db-create-z4kl4\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.554740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlch\" (UniqueName: \"kubernetes.io/projected/5d5fed33-52f8-4a1a-9096-794711814cf5-kube-api-access-7nlch\") pod \"keystone-db-create-z4kl4\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.600536 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e187-account-create-update-ktfqv"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.601714 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.606794 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.633464 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e187-account-create-update-ktfqv"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.657719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5fed33-52f8-4a1a-9096-794711814cf5-operator-scripts\") pod \"keystone-db-create-z4kl4\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.657805 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd2xz\" (UniqueName: \"kubernetes.io/projected/18f5abdd-e891-46c4-87ef-b6446b54bf07-kube-api-access-dd2xz\") pod \"keystone-e187-account-create-update-ktfqv\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.657858 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f5abdd-e891-46c4-87ef-b6446b54bf07-operator-scripts\") pod \"keystone-e187-account-create-update-ktfqv\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.657932 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlch\" (UniqueName: \"kubernetes.io/projected/5d5fed33-52f8-4a1a-9096-794711814cf5-kube-api-access-7nlch\") pod \"keystone-db-create-z4kl4\" 
(UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.662316 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5fed33-52f8-4a1a-9096-794711814cf5-operator-scripts\") pod \"keystone-db-create-z4kl4\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.720642 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlch\" (UniqueName: \"kubernetes.io/projected/5d5fed33-52f8-4a1a-9096-794711814cf5-kube-api-access-7nlch\") pod \"keystone-db-create-z4kl4\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.759699 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd2xz\" (UniqueName: \"kubernetes.io/projected/18f5abdd-e891-46c4-87ef-b6446b54bf07-kube-api-access-dd2xz\") pod \"keystone-e187-account-create-update-ktfqv\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.760285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f5abdd-e891-46c4-87ef-b6446b54bf07-operator-scripts\") pod \"keystone-e187-account-create-update-ktfqv\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.761716 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f5abdd-e891-46c4-87ef-b6446b54bf07-operator-scripts\") pod 
\"keystone-e187-account-create-update-ktfqv\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.769856 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-wdc2m"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.771076 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.776930 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wdc2m"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.782082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd2xz\" (UniqueName: \"kubernetes.io/projected/18f5abdd-e891-46c4-87ef-b6446b54bf07-kube-api-access-dd2xz\") pod \"keystone-e187-account-create-update-ktfqv\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.863778 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nz48\" (UniqueName: \"kubernetes.io/projected/11a49820-f006-42b2-8441-525ca8601f6c-kube-api-access-8nz48\") pod \"placement-db-create-wdc2m\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.864101 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a49820-f006-42b2-8441-525ca8601f6c-operator-scripts\") pod \"placement-db-create-wdc2m\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.867330 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/placement-1276-account-create-update-7vdml"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.868458 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.873034 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.873136 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.881892 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1276-account-create-update-7vdml"] Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.965011 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.966285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nz48\" (UniqueName: \"kubernetes.io/projected/11a49820-f006-42b2-8441-525ca8601f6c-kube-api-access-8nz48\") pod \"placement-db-create-wdc2m\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.966485 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bdp8\" (UniqueName: \"kubernetes.io/projected/e935f38b-5459-4bcc-a9f0-50e5cecef101-kube-api-access-5bdp8\") pod \"placement-1276-account-create-update-7vdml\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.966732 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a49820-f006-42b2-8441-525ca8601f6c-operator-scripts\") pod \"placement-db-create-wdc2m\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.966775 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935f38b-5459-4bcc-a9f0-50e5cecef101-operator-scripts\") pod \"placement-1276-account-create-update-7vdml\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:38 crc kubenswrapper[4823]: I0126 15:05:38.968327 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a49820-f006-42b2-8441-525ca8601f6c-operator-scripts\") pod \"placement-db-create-wdc2m\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.026245 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nz48\" (UniqueName: \"kubernetes.io/projected/11a49820-f006-42b2-8441-525ca8601f6c-kube-api-access-8nz48\") pod \"placement-db-create-wdc2m\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.068912 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bdp8\" (UniqueName: \"kubernetes.io/projected/e935f38b-5459-4bcc-a9f0-50e5cecef101-kube-api-access-5bdp8\") pod \"placement-1276-account-create-update-7vdml\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.069057 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935f38b-5459-4bcc-a9f0-50e5cecef101-operator-scripts\") pod \"placement-1276-account-create-update-7vdml\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.070157 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935f38b-5459-4bcc-a9f0-50e5cecef101-operator-scripts\") pod \"placement-1276-account-create-update-7vdml\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.095952 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bdp8\" (UniqueName: \"kubernetes.io/projected/e935f38b-5459-4bcc-a9f0-50e5cecef101-kube-api-access-5bdp8\") pod \"placement-1276-account-create-update-7vdml\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.110537 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gpxrj"] Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.120470 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.122878 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.129285 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gpxrj"] Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.171407 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwr5k\" (UniqueName: \"kubernetes.io/projected/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-kube-api-access-xwr5k\") pod \"glance-db-create-gpxrj\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.171973 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-operator-scripts\") pod \"glance-db-create-gpxrj\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.196026 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.235259 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a730-account-create-update-4grxj"] Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.236952 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.245124 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.247812 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a730-account-create-update-4grxj"] Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.273523 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwr5k\" (UniqueName: \"kubernetes.io/projected/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-kube-api-access-xwr5k\") pod \"glance-db-create-gpxrj\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.273585 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-operator-scripts\") pod \"glance-db-create-gpxrj\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.274637 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-operator-scripts\") pod \"glance-db-create-gpxrj\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.294436 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwr5k\" (UniqueName: \"kubernetes.io/projected/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-kube-api-access-xwr5k\") pod \"glance-db-create-gpxrj\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 
15:05:39.375513 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac560032-e524-45c4-bc11-a960f50c4f07-operator-scripts\") pod \"glance-a730-account-create-update-4grxj\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.375601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8j85\" (UniqueName: \"kubernetes.io/projected/ac560032-e524-45c4-bc11-a960f50c4f07-kube-api-access-m8j85\") pod \"glance-a730-account-create-update-4grxj\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.391724 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-z4kl4"] Jan 26 15:05:39 crc kubenswrapper[4823]: W0126 15:05:39.402172 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d5fed33_52f8_4a1a_9096_794711814cf5.slice/crio-37594f2a179ae84428fa1b6b09b2375139abd318090601d0cef5ec863d347e53 WatchSource:0}: Error finding container 37594f2a179ae84428fa1b6b09b2375139abd318090601d0cef5ec863d347e53: Status 404 returned error can't find the container with id 37594f2a179ae84428fa1b6b09b2375139abd318090601d0cef5ec863d347e53 Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.447432 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.478187 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac560032-e524-45c4-bc11-a960f50c4f07-operator-scripts\") pod \"glance-a730-account-create-update-4grxj\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.478257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8j85\" (UniqueName: \"kubernetes.io/projected/ac560032-e524-45c4-bc11-a960f50c4f07-kube-api-access-m8j85\") pod \"glance-a730-account-create-update-4grxj\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.479160 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac560032-e524-45c4-bc11-a960f50c4f07-operator-scripts\") pod \"glance-a730-account-create-update-4grxj\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.499260 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8j85\" (UniqueName: \"kubernetes.io/projected/ac560032-e524-45c4-bc11-a960f50c4f07-kube-api-access-m8j85\") pod \"glance-a730-account-create-update-4grxj\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.564178 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.594667 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e187-account-create-update-ktfqv"] Jan 26 15:05:39 crc kubenswrapper[4823]: W0126 15:05:39.615163 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18f5abdd_e891_46c4_87ef_b6446b54bf07.slice/crio-e8f33167e66c61c9418ae08d9d2ff6fb8d67217b1c805b7cb3ceeb69f25612d8 WatchSource:0}: Error finding container e8f33167e66c61c9418ae08d9d2ff6fb8d67217b1c805b7cb3ceeb69f25612d8: Status 404 returned error can't find the container with id e8f33167e66c61c9418ae08d9d2ff6fb8d67217b1c805b7cb3ceeb69f25612d8 Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.703457 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-wdc2m"] Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.794225 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1276-account-create-update-7vdml"] Jan 26 15:05:39 crc kubenswrapper[4823]: I0126 15:05:39.811565 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gpxrj"] Jan 26 15:05:40 crc kubenswrapper[4823]: W0126 15:05:40.011440 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode935f38b_5459_4bcc_a9f0_50e5cecef101.slice/crio-ae12172ddf39bd523e6409ff35fb31a8642430f6cb917a56241fa8e94f56564a WatchSource:0}: Error finding container ae12172ddf39bd523e6409ff35fb31a8642430f6cb917a56241fa8e94f56564a: Status 404 returned error can't find the container with id ae12172ddf39bd523e6409ff35fb31a8642430f6cb917a56241fa8e94f56564a Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.132167 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-e187-account-create-update-ktfqv" event={"ID":"18f5abdd-e891-46c4-87ef-b6446b54bf07","Type":"ContainerStarted","Data":"e8f33167e66c61c9418ae08d9d2ff6fb8d67217b1c805b7cb3ceeb69f25612d8"} Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.133903 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1276-account-create-update-7vdml" event={"ID":"e935f38b-5459-4bcc-a9f0-50e5cecef101","Type":"ContainerStarted","Data":"ae12172ddf39bd523e6409ff35fb31a8642430f6cb917a56241fa8e94f56564a"} Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.139309 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdc2m" event={"ID":"11a49820-f006-42b2-8441-525ca8601f6c","Type":"ContainerStarted","Data":"9b9b7675f5a84a74593cab5bc6c92396333fa563eab397010b4724b3d9c59b3c"} Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.143702 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gpxrj" event={"ID":"3eb6be81-80b7-40c3-a17e-f09cc5c0715f","Type":"ContainerStarted","Data":"cd83418878c1331d08ddaa0a1c9e8e0429c9d37053c7add23db8c5b5b6eaf0c8"} Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.152701 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z4kl4" event={"ID":"5d5fed33-52f8-4a1a-9096-794711814cf5","Type":"ContainerStarted","Data":"5e05c66a05b73f1d03b73657de8bcb53fe8aa6bccdf2cf98b74431e4e785a48a"} Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.152781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z4kl4" event={"ID":"5d5fed33-52f8-4a1a-9096-794711814cf5","Type":"ContainerStarted","Data":"37594f2a179ae84428fa1b6b09b2375139abd318090601d0cef5ec863d347e53"} Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.174447 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-z4kl4" podStartSLOduration=2.17441707 
podStartE2EDuration="2.17441707s" podCreationTimestamp="2026-01-26 15:05:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:05:40.173622889 +0000 UTC m=+1136.859085994" watchObservedRunningTime="2026-01-26 15:05:40.17441707 +0000 UTC m=+1136.859880176" Jan 26 15:05:40 crc kubenswrapper[4823]: I0126 15:05:40.472952 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a730-account-create-update-4grxj"] Jan 26 15:05:40 crc kubenswrapper[4823]: W0126 15:05:40.480886 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac560032_e524_45c4_bc11_a960f50c4f07.slice/crio-704c6750f49d761a4e1b23e054f36f61c81641aa5618a1c8ddacda60aec62f21 WatchSource:0}: Error finding container 704c6750f49d761a4e1b23e054f36f61c81641aa5618a1c8ddacda60aec62f21: Status 404 returned error can't find the container with id 704c6750f49d761a4e1b23e054f36f61c81641aa5618a1c8ddacda60aec62f21 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.166190 4823 generic.go:334] "Generic (PLEG): container finished" podID="3eb6be81-80b7-40c3-a17e-f09cc5c0715f" containerID="466100ee52c96eb0a38bf74ee44711b88ab0aeab29df34f0dc7120cc8f0d56d2" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.166263 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gpxrj" event={"ID":"3eb6be81-80b7-40c3-a17e-f09cc5c0715f","Type":"ContainerDied","Data":"466100ee52c96eb0a38bf74ee44711b88ab0aeab29df34f0dc7120cc8f0d56d2"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.168449 4823 generic.go:334] "Generic (PLEG): container finished" podID="5d5fed33-52f8-4a1a-9096-794711814cf5" containerID="5e05c66a05b73f1d03b73657de8bcb53fe8aa6bccdf2cf98b74431e4e785a48a" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.168524 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/keystone-db-create-z4kl4" event={"ID":"5d5fed33-52f8-4a1a-9096-794711814cf5","Type":"ContainerDied","Data":"5e05c66a05b73f1d03b73657de8bcb53fe8aa6bccdf2cf98b74431e4e785a48a"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.170798 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac560032-e524-45c4-bc11-a960f50c4f07" containerID="e4946988874a34a93ea4ddb0e1df44a2b56b4ef55e05d0bf6306865899a8d489" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.170913 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a730-account-create-update-4grxj" event={"ID":"ac560032-e524-45c4-bc11-a960f50c4f07","Type":"ContainerDied","Data":"e4946988874a34a93ea4ddb0e1df44a2b56b4ef55e05d0bf6306865899a8d489"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.170983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a730-account-create-update-4grxj" event={"ID":"ac560032-e524-45c4-bc11-a960f50c4f07","Type":"ContainerStarted","Data":"704c6750f49d761a4e1b23e054f36f61c81641aa5618a1c8ddacda60aec62f21"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.173516 4823 generic.go:334] "Generic (PLEG): container finished" podID="18f5abdd-e891-46c4-87ef-b6446b54bf07" containerID="73c3e7a5a99229b0ad8e4daa3f0a0a7857d02c7977e5f5c32c4a12fceadd5038" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.173565 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e187-account-create-update-ktfqv" event={"ID":"18f5abdd-e891-46c4-87ef-b6446b54bf07","Type":"ContainerDied","Data":"73c3e7a5a99229b0ad8e4daa3f0a0a7857d02c7977e5f5c32c4a12fceadd5038"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.175332 4823 generic.go:334] "Generic (PLEG): container finished" podID="e935f38b-5459-4bcc-a9f0-50e5cecef101" containerID="5ddced960be17ed81d13233645b6eeb12796e1ebabc5f2916ef8eb859ca99c57" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 
15:05:41.175537 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1276-account-create-update-7vdml" event={"ID":"e935f38b-5459-4bcc-a9f0-50e5cecef101","Type":"ContainerDied","Data":"5ddced960be17ed81d13233645b6eeb12796e1ebabc5f2916ef8eb859ca99c57"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.183807 4823 generic.go:334] "Generic (PLEG): container finished" podID="11a49820-f006-42b2-8441-525ca8601f6c" containerID="1390be81e78856b4dc56442c7c082b9c4d9ff0d32b7d3e8fbb68a2596f2a3248" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.183908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdc2m" event={"ID":"11a49820-f006-42b2-8441-525ca8601f6c","Type":"ContainerDied","Data":"1390be81e78856b4dc56442c7c082b9c4d9ff0d32b7d3e8fbb68a2596f2a3248"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.192121 4823 generic.go:334] "Generic (PLEG): container finished" podID="c541d2f3-f29a-4151-9fb4-031b967b8969" containerID="4761f26d82e6fc4c9c9ced8d686425fa4265970e598f982b1fe5d3e9d152304a" exitCode=0 Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.192173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-84q84" event={"ID":"c541d2f3-f29a-4151-9fb4-031b967b8969","Type":"ContainerDied","Data":"4761f26d82e6fc4c9c9ced8d686425fa4265970e598f982b1fe5d3e9d152304a"} Jan 26 15:05:41 crc kubenswrapper[4823]: I0126 15:05:41.195176 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.644007 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.775116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935f38b-5459-4bcc-a9f0-50e5cecef101-operator-scripts\") pod \"e935f38b-5459-4bcc-a9f0-50e5cecef101\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.775290 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bdp8\" (UniqueName: \"kubernetes.io/projected/e935f38b-5459-4bcc-a9f0-50e5cecef101-kube-api-access-5bdp8\") pod \"e935f38b-5459-4bcc-a9f0-50e5cecef101\" (UID: \"e935f38b-5459-4bcc-a9f0-50e5cecef101\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.776062 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e935f38b-5459-4bcc-a9f0-50e5cecef101-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e935f38b-5459-4bcc-a9f0-50e5cecef101" (UID: "e935f38b-5459-4bcc-a9f0-50e5cecef101"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.793022 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e935f38b-5459-4bcc-a9f0-50e5cecef101-kube-api-access-5bdp8" (OuterVolumeSpecName: "kube-api-access-5bdp8") pod "e935f38b-5459-4bcc-a9f0-50e5cecef101" (UID: "e935f38b-5459-4bcc-a9f0-50e5cecef101"). InnerVolumeSpecName "kube-api-access-5bdp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.861303 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.873450 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-84q84" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.879780 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bdp8\" (UniqueName: \"kubernetes.io/projected/e935f38b-5459-4bcc-a9f0-50e5cecef101-kube-api-access-5bdp8\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.879831 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e935f38b-5459-4bcc-a9f0-50e5cecef101-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.881273 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.907194 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.921459 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.933476 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.980584 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-operator-scripts\") pod \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.981122 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwr5k\" (UniqueName: \"kubernetes.io/projected/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-kube-api-access-xwr5k\") pod \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\" (UID: \"3eb6be81-80b7-40c3-a17e-f09cc5c0715f\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.981201 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c541d2f3-f29a-4151-9fb4-031b967b8969-operator-scripts\") pod \"c541d2f3-f29a-4151-9fb4-031b967b8969\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.981275 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwpc\" (UniqueName: \"kubernetes.io/projected/c541d2f3-f29a-4151-9fb4-031b967b8969-kube-api-access-jmwpc\") pod \"c541d2f3-f29a-4151-9fb4-031b967b8969\" (UID: \"c541d2f3-f29a-4151-9fb4-031b967b8969\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.981346 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5fed33-52f8-4a1a-9096-794711814cf5-operator-scripts\") pod \"5d5fed33-52f8-4a1a-9096-794711814cf5\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.981399 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3eb6be81-80b7-40c3-a17e-f09cc5c0715f" (UID: "3eb6be81-80b7-40c3-a17e-f09cc5c0715f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.981431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nlch\" (UniqueName: \"kubernetes.io/projected/5d5fed33-52f8-4a1a-9096-794711814cf5-kube-api-access-7nlch\") pod \"5d5fed33-52f8-4a1a-9096-794711814cf5\" (UID: \"5d5fed33-52f8-4a1a-9096-794711814cf5\") " Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.982018 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c541d2f3-f29a-4151-9fb4-031b967b8969-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c541d2f3-f29a-4151-9fb4-031b967b8969" (UID: "c541d2f3-f29a-4151-9fb4-031b967b8969"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.983446 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5fed33-52f8-4a1a-9096-794711814cf5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d5fed33-52f8-4a1a-9096-794711814cf5" (UID: "5d5fed33-52f8-4a1a-9096-794711814cf5"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.983784 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5fed33-52f8-4a1a-9096-794711814cf5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.983812 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.983823 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c541d2f3-f29a-4151-9fb4-031b967b8969-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.986098 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c541d2f3-f29a-4151-9fb4-031b967b8969-kube-api-access-jmwpc" (OuterVolumeSpecName: "kube-api-access-jmwpc") pod "c541d2f3-f29a-4151-9fb4-031b967b8969" (UID: "c541d2f3-f29a-4151-9fb4-031b967b8969"). InnerVolumeSpecName "kube-api-access-jmwpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.993652 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-kube-api-access-xwr5k" (OuterVolumeSpecName: "kube-api-access-xwr5k") pod "3eb6be81-80b7-40c3-a17e-f09cc5c0715f" (UID: "3eb6be81-80b7-40c3-a17e-f09cc5c0715f"). InnerVolumeSpecName "kube-api-access-xwr5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:42 crc kubenswrapper[4823]: I0126 15:05:42.995683 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5fed33-52f8-4a1a-9096-794711814cf5-kube-api-access-7nlch" (OuterVolumeSpecName: "kube-api-access-7nlch") pod "5d5fed33-52f8-4a1a-9096-794711814cf5" (UID: "5d5fed33-52f8-4a1a-9096-794711814cf5"). InnerVolumeSpecName "kube-api-access-7nlch". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.085329 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8j85\" (UniqueName: \"kubernetes.io/projected/ac560032-e524-45c4-bc11-a960f50c4f07-kube-api-access-m8j85\") pod \"ac560032-e524-45c4-bc11-a960f50c4f07\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.085534 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f5abdd-e891-46c4-87ef-b6446b54bf07-operator-scripts\") pod \"18f5abdd-e891-46c4-87ef-b6446b54bf07\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.085632 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd2xz\" (UniqueName: \"kubernetes.io/projected/18f5abdd-e891-46c4-87ef-b6446b54bf07-kube-api-access-dd2xz\") pod \"18f5abdd-e891-46c4-87ef-b6446b54bf07\" (UID: \"18f5abdd-e891-46c4-87ef-b6446b54bf07\") " Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.085728 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a49820-f006-42b2-8441-525ca8601f6c-operator-scripts\") pod \"11a49820-f006-42b2-8441-525ca8601f6c\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " Jan 26 15:05:43 crc 
kubenswrapper[4823]: I0126 15:05:43.085754 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac560032-e524-45c4-bc11-a960f50c4f07-operator-scripts\") pod \"ac560032-e524-45c4-bc11-a960f50c4f07\" (UID: \"ac560032-e524-45c4-bc11-a960f50c4f07\") " Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.085830 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nz48\" (UniqueName: \"kubernetes.io/projected/11a49820-f006-42b2-8441-525ca8601f6c-kube-api-access-8nz48\") pod \"11a49820-f006-42b2-8441-525ca8601f6c\" (UID: \"11a49820-f006-42b2-8441-525ca8601f6c\") " Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.086310 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11a49820-f006-42b2-8441-525ca8601f6c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11a49820-f006-42b2-8441-525ca8601f6c" (UID: "11a49820-f006-42b2-8441-525ca8601f6c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.086385 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f5abdd-e891-46c4-87ef-b6446b54bf07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18f5abdd-e891-46c4-87ef-b6446b54bf07" (UID: "18f5abdd-e891-46c4-87ef-b6446b54bf07"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.086569 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac560032-e524-45c4-bc11-a960f50c4f07-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ac560032-e524-45c4-bc11-a960f50c4f07" (UID: "ac560032-e524-45c4-bc11-a960f50c4f07"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.087251 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18f5abdd-e891-46c4-87ef-b6446b54bf07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.087280 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nlch\" (UniqueName: \"kubernetes.io/projected/5d5fed33-52f8-4a1a-9096-794711814cf5-kube-api-access-7nlch\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.087302 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11a49820-f006-42b2-8441-525ca8601f6c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.087320 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac560032-e524-45c4-bc11-a960f50c4f07-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.087337 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwr5k\" (UniqueName: \"kubernetes.io/projected/3eb6be81-80b7-40c3-a17e-f09cc5c0715f-kube-api-access-xwr5k\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.087354 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwpc\" (UniqueName: \"kubernetes.io/projected/c541d2f3-f29a-4151-9fb4-031b967b8969-kube-api-access-jmwpc\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.090119 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac560032-e524-45c4-bc11-a960f50c4f07-kube-api-access-m8j85" (OuterVolumeSpecName: "kube-api-access-m8j85") pod 
"ac560032-e524-45c4-bc11-a960f50c4f07" (UID: "ac560032-e524-45c4-bc11-a960f50c4f07"). InnerVolumeSpecName "kube-api-access-m8j85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.090266 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11a49820-f006-42b2-8441-525ca8601f6c-kube-api-access-8nz48" (OuterVolumeSpecName: "kube-api-access-8nz48") pod "11a49820-f006-42b2-8441-525ca8601f6c" (UID: "11a49820-f006-42b2-8441-525ca8601f6c"). InnerVolumeSpecName "kube-api-access-8nz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.090782 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f5abdd-e891-46c4-87ef-b6446b54bf07-kube-api-access-dd2xz" (OuterVolumeSpecName: "kube-api-access-dd2xz") pod "18f5abdd-e891-46c4-87ef-b6446b54bf07" (UID: "18f5abdd-e891-46c4-87ef-b6446b54bf07"). InnerVolumeSpecName "kube-api-access-dd2xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.190596 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd2xz\" (UniqueName: \"kubernetes.io/projected/18f5abdd-e891-46c4-87ef-b6446b54bf07-kube-api-access-dd2xz\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.190653 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nz48\" (UniqueName: \"kubernetes.io/projected/11a49820-f006-42b2-8441-525ca8601f6c-kube-api-access-8nz48\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.190667 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8j85\" (UniqueName: \"kubernetes.io/projected/ac560032-e524-45c4-bc11-a960f50c4f07-kube-api-access-m8j85\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.222035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-84q84" event={"ID":"c541d2f3-f29a-4151-9fb4-031b967b8969","Type":"ContainerDied","Data":"1ea9e5e48f7afea4b87e077409c99b346d3aae165df4668059dfada928d39f63"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.222092 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ea9e5e48f7afea4b87e077409c99b346d3aae165df4668059dfada928d39f63" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.222188 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-84q84" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.228989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gpxrj" event={"ID":"3eb6be81-80b7-40c3-a17e-f09cc5c0715f","Type":"ContainerDied","Data":"cd83418878c1331d08ddaa0a1c9e8e0429c9d37053c7add23db8c5b5b6eaf0c8"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.229044 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd83418878c1331d08ddaa0a1c9e8e0429c9d37053c7add23db8c5b5b6eaf0c8" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.229051 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gpxrj" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.242034 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-z4kl4" event={"ID":"5d5fed33-52f8-4a1a-9096-794711814cf5","Type":"ContainerDied","Data":"37594f2a179ae84428fa1b6b09b2375139abd318090601d0cef5ec863d347e53"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.242083 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37594f2a179ae84428fa1b6b09b2375139abd318090601d0cef5ec863d347e53" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.242157 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-z4kl4" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.262748 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a730-account-create-update-4grxj" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.262777 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a730-account-create-update-4grxj" event={"ID":"ac560032-e524-45c4-bc11-a960f50c4f07","Type":"ContainerDied","Data":"704c6750f49d761a4e1b23e054f36f61c81641aa5618a1c8ddacda60aec62f21"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.262858 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704c6750f49d761a4e1b23e054f36f61c81641aa5618a1c8ddacda60aec62f21" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.276188 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e187-account-create-update-ktfqv" event={"ID":"18f5abdd-e891-46c4-87ef-b6446b54bf07","Type":"ContainerDied","Data":"e8f33167e66c61c9418ae08d9d2ff6fb8d67217b1c805b7cb3ceeb69f25612d8"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.276255 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8f33167e66c61c9418ae08d9d2ff6fb8d67217b1c805b7cb3ceeb69f25612d8" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.276411 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e187-account-create-update-ktfqv" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.293276 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1276-account-create-update-7vdml" event={"ID":"e935f38b-5459-4bcc-a9f0-50e5cecef101","Type":"ContainerDied","Data":"ae12172ddf39bd523e6409ff35fb31a8642430f6cb917a56241fa8e94f56564a"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.293336 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae12172ddf39bd523e6409ff35fb31a8642430f6cb917a56241fa8e94f56564a" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.293478 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1276-account-create-update-7vdml" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.305808 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-wdc2m" event={"ID":"11a49820-f006-42b2-8441-525ca8601f6c","Type":"ContainerDied","Data":"9b9b7675f5a84a74593cab5bc6c92396333fa563eab397010b4724b3d9c59b3c"} Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.305910 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b9b7675f5a84a74593cab5bc6c92396333fa563eab397010b4724b3d9c59b3c" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.305984 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-wdc2m" Jan 26 15:05:43 crc kubenswrapper[4823]: I0126 15:05:43.393013 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.437609 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-gp7n5"] Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438505 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eb6be81-80b7-40c3-a17e-f09cc5c0715f" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438526 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eb6be81-80b7-40c3-a17e-f09cc5c0715f" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438544 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac560032-e524-45c4-bc11-a960f50c4f07" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438551 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac560032-e524-45c4-bc11-a960f50c4f07" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438571 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e935f38b-5459-4bcc-a9f0-50e5cecef101" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438577 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e935f38b-5459-4bcc-a9f0-50e5cecef101" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438589 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5fed33-52f8-4a1a-9096-794711814cf5" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438594 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5d5fed33-52f8-4a1a-9096-794711814cf5" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438607 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11a49820-f006-42b2-8441-525ca8601f6c" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438613 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="11a49820-f006-42b2-8441-525ca8601f6c" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438624 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f5abdd-e891-46c4-87ef-b6446b54bf07" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438630 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f5abdd-e891-46c4-87ef-b6446b54bf07" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: E0126 15:05:44.438651 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c541d2f3-f29a-4151-9fb4-031b967b8969" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438659 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c541d2f3-f29a-4151-9fb4-031b967b8969" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438832 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e935f38b-5459-4bcc-a9f0-50e5cecef101" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438849 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c541d2f3-f29a-4151-9fb4-031b967b8969" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438859 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="11a49820-f006-42b2-8441-525ca8601f6c" containerName="mariadb-database-create" Jan 
26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438872 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eb6be81-80b7-40c3-a17e-f09cc5c0715f" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438883 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f5abdd-e891-46c4-87ef-b6446b54bf07" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438895 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac560032-e524-45c4-bc11-a960f50c4f07" containerName="mariadb-account-create-update" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.438908 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5fed33-52f8-4a1a-9096-794711814cf5" containerName="mariadb-database-create" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.439746 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.442826 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nm7mr" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.445056 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.460801 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gp7n5"] Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.623527 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxk4w\" (UniqueName: \"kubernetes.io/projected/461e74af-b7a9-4451-a07d-42f47a806286-kube-api-access-nxk4w\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.623692 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-combined-ca-bundle\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.623763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-config-data\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.623796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-db-sync-config-data\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.725073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-combined-ca-bundle\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.726068 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-config-data\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.726099 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-db-sync-config-data\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.726165 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxk4w\" (UniqueName: \"kubernetes.io/projected/461e74af-b7a9-4451-a07d-42f47a806286-kube-api-access-nxk4w\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.733384 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-combined-ca-bundle\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.734046 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-config-data\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.734548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-db-sync-config-data\") pod \"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.747237 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxk4w\" (UniqueName: \"kubernetes.io/projected/461e74af-b7a9-4451-a07d-42f47a806286-kube-api-access-nxk4w\") pod 
\"glance-db-sync-gp7n5\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:44 crc kubenswrapper[4823]: I0126 15:05:44.761707 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gp7n5" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.156841 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-gp7n5"] Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.329255 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gp7n5" event={"ID":"461e74af-b7a9-4451-a07d-42f47a806286","Type":"ContainerStarted","Data":"6611467068a848880b7bad7528052bb5f4cd3dad87da4d73c08c195f495beb65"} Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.334312 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-s4g2z" podUID="366c188c-7e0f-4ac6-8fa6-7a466714d0ea" containerName="ovn-controller" probeResult="failure" output=< Jan 26 15:05:45 crc kubenswrapper[4823]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 26 15:05:45 crc kubenswrapper[4823]: > Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.347582 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.356477 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-twc9z" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.596077 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-s4g2z-config-hf748"] Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.597234 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.602830 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.615843 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s4g2z-config-hf748"] Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.745812 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-additional-scripts\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.745888 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/d3a068dc-2303-4103-bbc0-eb042becbf5c-kube-api-access-zhc87\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.745932 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-log-ovn\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.745966 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: 
\"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.746018 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-scripts\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.746072 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run-ovn\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run-ovn\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848091 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-additional-scripts\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848124 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/d3a068dc-2303-4103-bbc0-eb042becbf5c-kube-api-access-zhc87\") pod 
\"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848153 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-log-ovn\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848179 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-scripts\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848475 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run-ovn\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848753 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-log-ovn\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: 
\"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.848904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.849709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-additional-scripts\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.850534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-scripts\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.873463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/d3a068dc-2303-4103-bbc0-eb042becbf5c-kube-api-access-zhc87\") pod \"ovn-controller-s4g2z-config-hf748\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:45 crc kubenswrapper[4823]: I0126 15:05:45.922510 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:46 crc kubenswrapper[4823]: I0126 15:05:46.399681 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-s4g2z-config-hf748"] Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.187123 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-84q84"] Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.193655 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-84q84"] Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.392623 4823 generic.go:334] "Generic (PLEG): container finished" podID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerID="d324e4498e25c790364c529d8ff7c5a42be04ccc727f54417de05094a26b7b1f" exitCode=0 Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.392743 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43c52fb-3ef3-4d3e-984d-642a9bc09469","Type":"ContainerDied","Data":"d324e4498e25c790364c529d8ff7c5a42be04ccc727f54417de05094a26b7b1f"} Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.397545 4823 generic.go:334] "Generic (PLEG): container finished" podID="d3a068dc-2303-4103-bbc0-eb042becbf5c" containerID="04bd6c70314d9629c420f6379a2420c34cc3d405d46c54841b21f2ec2e5089a5" exitCode=0 Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.397768 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s4g2z-config-hf748" event={"ID":"d3a068dc-2303-4103-bbc0-eb042becbf5c","Type":"ContainerDied","Data":"04bd6c70314d9629c420f6379a2420c34cc3d405d46c54841b21f2ec2e5089a5"} Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.397842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s4g2z-config-hf748" 
event={"ID":"d3a068dc-2303-4103-bbc0-eb042becbf5c","Type":"ContainerStarted","Data":"b1aedd7daee31e9f9dc393ec94e62473e168a3259468e0b0c91ca462e0523cd6"} Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.400677 4823 generic.go:334] "Generic (PLEG): container finished" podID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerID="4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3" exitCode=0 Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.400724 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a82c17e1-38ac-4448-b3ff-b18df77c521b","Type":"ContainerDied","Data":"4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3"} Jan 26 15:05:47 crc kubenswrapper[4823]: I0126 15:05:47.578353 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c541d2f3-f29a-4151-9fb4-031b967b8969" path="/var/lib/kubelet/pods/c541d2f3-f29a-4151-9fb4-031b967b8969/volumes" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.416014 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a82c17e1-38ac-4448-b3ff-b18df77c521b","Type":"ContainerStarted","Data":"9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df"} Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.416321 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.421192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43c52fb-3ef3-4d3e-984d-642a9bc09469","Type":"ContainerStarted","Data":"ed59d9bf4c7e8e5a1a8e23c753100b50e0bc2d0528d6eae294a01d96973d87b8"} Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.421495 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.458037 4823 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.663161264 podStartE2EDuration="1m24.458009033s" podCreationTimestamp="2026-01-26 15:04:24 +0000 UTC" firstStartedPulling="2026-01-26 15:04:30.782540772 +0000 UTC m=+1067.468003877" lastFinishedPulling="2026-01-26 15:05:12.577388541 +0000 UTC m=+1109.262851646" observedRunningTime="2026-01-26 15:05:48.450815766 +0000 UTC m=+1145.136278871" watchObservedRunningTime="2026-01-26 15:05:48.458009033 +0000 UTC m=+1145.143472138" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.500153 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.70040495 podStartE2EDuration="1m24.500124262s" podCreationTimestamp="2026-01-26 15:04:24 +0000 UTC" firstStartedPulling="2026-01-26 15:04:30.777583587 +0000 UTC m=+1067.463046702" lastFinishedPulling="2026-01-26 15:05:12.577302909 +0000 UTC m=+1109.262766014" observedRunningTime="2026-01-26 15:05:48.489285996 +0000 UTC m=+1145.174749121" watchObservedRunningTime="2026-01-26 15:05:48.500124262 +0000 UTC m=+1145.185587377" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.761292 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.922865 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run\") pod \"d3a068dc-2303-4103-bbc0-eb042becbf5c\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.922983 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run-ovn\") pod \"d3a068dc-2303-4103-bbc0-eb042becbf5c\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923057 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d3a068dc-2303-4103-bbc0-eb042becbf5c" (UID: "d3a068dc-2303-4103-bbc0-eb042becbf5c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923076 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run" (OuterVolumeSpecName: "var-run") pod "d3a068dc-2303-4103-bbc0-eb042becbf5c" (UID: "d3a068dc-2303-4103-bbc0-eb042becbf5c"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923126 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-scripts\") pod \"d3a068dc-2303-4103-bbc0-eb042becbf5c\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923179 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-additional-scripts\") pod \"d3a068dc-2303-4103-bbc0-eb042becbf5c\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923236 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-log-ovn\") pod \"d3a068dc-2303-4103-bbc0-eb042becbf5c\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923290 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/d3a068dc-2303-4103-bbc0-eb042becbf5c-kube-api-access-zhc87\") pod \"d3a068dc-2303-4103-bbc0-eb042becbf5c\" (UID: \"d3a068dc-2303-4103-bbc0-eb042becbf5c\") " Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923512 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d3a068dc-2303-4103-bbc0-eb042becbf5c" (UID: "d3a068dc-2303-4103-bbc0-eb042becbf5c"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923792 4823 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923820 4823 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.923833 4823 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d3a068dc-2303-4103-bbc0-eb042becbf5c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.924245 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d3a068dc-2303-4103-bbc0-eb042becbf5c" (UID: "d3a068dc-2303-4103-bbc0-eb042becbf5c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.924729 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-scripts" (OuterVolumeSpecName: "scripts") pod "d3a068dc-2303-4103-bbc0-eb042becbf5c" (UID: "d3a068dc-2303-4103-bbc0-eb042becbf5c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:05:48 crc kubenswrapper[4823]: I0126 15:05:48.929565 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a068dc-2303-4103-bbc0-eb042becbf5c-kube-api-access-zhc87" (OuterVolumeSpecName: "kube-api-access-zhc87") pod "d3a068dc-2303-4103-bbc0-eb042becbf5c" (UID: "d3a068dc-2303-4103-bbc0-eb042becbf5c"). InnerVolumeSpecName "kube-api-access-zhc87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.025525 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.025569 4823 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d3a068dc-2303-4103-bbc0-eb042becbf5c-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.025584 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhc87\" (UniqueName: \"kubernetes.io/projected/d3a068dc-2303-4103-bbc0-eb042becbf5c-kube-api-access-zhc87\") on node \"crc\" DevicePath \"\"" Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.443948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-s4g2z-config-hf748" event={"ID":"d3a068dc-2303-4103-bbc0-eb042becbf5c","Type":"ContainerDied","Data":"b1aedd7daee31e9f9dc393ec94e62473e168a3259468e0b0c91ca462e0523cd6"} Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.444502 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1aedd7daee31e9f9dc393ec94e62473e168a3259468e0b0c91ca462e0523cd6" Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.444118 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-s4g2z-config-hf748" Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.880417 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-s4g2z-config-hf748"] Jan 26 15:05:49 crc kubenswrapper[4823]: I0126 15:05:49.893483 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-s4g2z-config-hf748"] Jan 26 15:05:50 crc kubenswrapper[4823]: I0126 15:05:50.355532 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-s4g2z" Jan 26 15:05:51 crc kubenswrapper[4823]: I0126 15:05:51.573786 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a068dc-2303-4103-bbc0-eb042becbf5c" path="/var/lib/kubelet/pods/d3a068dc-2303-4103-bbc0-eb042becbf5c/volumes" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.197997 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-xjz88"] Jan 26 15:05:52 crc kubenswrapper[4823]: E0126 15:05:52.198426 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a068dc-2303-4103-bbc0-eb042becbf5c" containerName="ovn-config" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.198444 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a068dc-2303-4103-bbc0-eb042becbf5c" containerName="ovn-config" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.198621 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a068dc-2303-4103-bbc0-eb042becbf5c" containerName="ovn-config" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.199219 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.202429 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.210913 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xjz88"] Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.306488 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwcf7\" (UniqueName: \"kubernetes.io/projected/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-kube-api-access-gwcf7\") pod \"root-account-create-update-xjz88\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.306575 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-operator-scripts\") pod \"root-account-create-update-xjz88\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.411263 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-operator-scripts\") pod \"root-account-create-update-xjz88\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.411675 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwcf7\" (UniqueName: \"kubernetes.io/projected/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-kube-api-access-gwcf7\") pod \"root-account-create-update-xjz88\" (UID: 
\"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.412255 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-operator-scripts\") pod \"root-account-create-update-xjz88\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.456481 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwcf7\" (UniqueName: \"kubernetes.io/projected/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-kube-api-access-gwcf7\") pod \"root-account-create-update-xjz88\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " pod="openstack/root-account-create-update-xjz88" Jan 26 15:05:52 crc kubenswrapper[4823]: I0126 15:05:52.528290 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xjz88" Jan 26 15:06:00 crc kubenswrapper[4823]: I0126 15:06:00.707211 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-xjz88"] Jan 26 15:06:01 crc kubenswrapper[4823]: I0126 15:06:01.575101 4823 generic.go:334] "Generic (PLEG): container finished" podID="3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" containerID="112560be173f448e111d3a3f526c688af0e81d8d2843ed84641543d53d69277f" exitCode=0 Jan 26 15:06:01 crc kubenswrapper[4823]: I0126 15:06:01.578750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xjz88" event={"ID":"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee","Type":"ContainerDied","Data":"112560be173f448e111d3a3f526c688af0e81d8d2843ed84641543d53d69277f"} Jan 26 15:06:01 crc kubenswrapper[4823]: I0126 15:06:01.578807 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xjz88" event={"ID":"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee","Type":"ContainerStarted","Data":"412e8fdfe05308f12b2b233a59d01c3f488ea78af856a7d07a48422b5c563fc4"} Jan 26 15:06:01 crc kubenswrapper[4823]: I0126 15:06:01.578822 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gp7n5" event={"ID":"461e74af-b7a9-4451-a07d-42f47a806286","Type":"ContainerStarted","Data":"609c61bad94595b78b905ced3dc30429d010fd1a0abaff984a6aad556b09de3f"} Jan 26 15:06:02 crc kubenswrapper[4823]: I0126 15:06:02.973087 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xjz88" Jan 26 15:06:02 crc kubenswrapper[4823]: I0126 15:06:02.994177 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-gp7n5" podStartSLOduration=3.88785822 podStartE2EDuration="18.994156068s" podCreationTimestamp="2026-01-26 15:05:44 +0000 UTC" firstStartedPulling="2026-01-26 15:05:45.165582191 +0000 UTC m=+1141.851045296" lastFinishedPulling="2026-01-26 15:06:00.271880039 +0000 UTC m=+1156.957343144" observedRunningTime="2026-01-26 15:06:01.623100882 +0000 UTC m=+1158.308563997" watchObservedRunningTime="2026-01-26 15:06:02.994156068 +0000 UTC m=+1159.679619173" Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.118756 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-operator-scripts\") pod \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.118807 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwcf7\" (UniqueName: \"kubernetes.io/projected/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-kube-api-access-gwcf7\") pod \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\" (UID: \"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee\") " Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.119955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" (UID: "3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.130101 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-kube-api-access-gwcf7" (OuterVolumeSpecName: "kube-api-access-gwcf7") pod "3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" (UID: "3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee"). InnerVolumeSpecName "kube-api-access-gwcf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.220767 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.220816 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwcf7\" (UniqueName: \"kubernetes.io/projected/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee-kube-api-access-gwcf7\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.597708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-xjz88" event={"ID":"3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee","Type":"ContainerDied","Data":"412e8fdfe05308f12b2b233a59d01c3f488ea78af856a7d07a48422b5c563fc4"} Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.597770 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="412e8fdfe05308f12b2b233a59d01c3f488ea78af856a7d07a48422b5c563fc4" Jan 26 15:06:03 crc kubenswrapper[4823]: I0126 15:06:03.597877 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-xjz88" Jan 26 15:06:04 crc kubenswrapper[4823]: I0126 15:06:04.508911 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:06:04 crc kubenswrapper[4823]: I0126 15:06:04.509001 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:06:05 crc kubenswrapper[4823]: I0126 15:06:05.678925 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 26 15:06:05 crc kubenswrapper[4823]: I0126 15:06:05.988901 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7wvns"] Jan 26 15:06:05 crc kubenswrapper[4823]: E0126 15:06:05.989447 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" containerName="mariadb-account-create-update" Jan 26 15:06:05 crc kubenswrapper[4823]: I0126 15:06:05.989475 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" containerName="mariadb-account-create-update" Jan 26 15:06:05 crc kubenswrapper[4823]: I0126 15:06:05.989667 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" containerName="mariadb-account-create-update" Jan 26 15:06:05 crc kubenswrapper[4823]: I0126 15:06:05.990406 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.000733 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7wvns"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.081642 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.081810 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-8rqmc"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.083701 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.092533 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5021-account-create-update-mpthz"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.094015 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.097034 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.099962 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8rqmc"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.127342 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-operator-scripts\") pod \"barbican-db-create-7wvns\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.127460 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdnr\" (UniqueName: \"kubernetes.io/projected/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-kube-api-access-jkdnr\") pod \"barbican-db-create-7wvns\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.171033 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5021-account-create-update-mpthz"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.239258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-operator-scripts\") pod \"cinder-db-create-8rqmc\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.239335 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnsw9\" (UniqueName: 
\"kubernetes.io/projected/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-kube-api-access-nnsw9\") pod \"cinder-db-create-8rqmc\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.239376 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-operator-scripts\") pod \"barbican-db-create-7wvns\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.239464 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr2cr\" (UniqueName: \"kubernetes.io/projected/e4df5511-77f2-4005-9179-933a42374141-kube-api-access-rr2cr\") pod \"cinder-5021-account-create-update-mpthz\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.239496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4df5511-77f2-4005-9179-933a42374141-operator-scripts\") pod \"cinder-5021-account-create-update-mpthz\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.239523 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdnr\" (UniqueName: \"kubernetes.io/projected/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-kube-api-access-jkdnr\") pod \"barbican-db-create-7wvns\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.241757 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-operator-scripts\") pod \"barbican-db-create-7wvns\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.244722 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-f352-account-create-update-fzwc2"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.245920 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.247963 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.258555 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f352-account-create-update-fzwc2"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.277578 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdnr\" (UniqueName: \"kubernetes.io/projected/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-kube-api-access-jkdnr\") pod \"barbican-db-create-7wvns\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.313146 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.340736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnsw9\" (UniqueName: \"kubernetes.io/projected/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-kube-api-access-nnsw9\") pod \"cinder-db-create-8rqmc\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.340808 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677c0fed-1e1f-4155-95ee-86291a16effa-operator-scripts\") pod \"barbican-f352-account-create-update-fzwc2\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.340877 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rr2cr\" (UniqueName: \"kubernetes.io/projected/e4df5511-77f2-4005-9179-933a42374141-kube-api-access-rr2cr\") pod \"cinder-5021-account-create-update-mpthz\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.340908 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4df5511-77f2-4005-9179-933a42374141-operator-scripts\") pod \"cinder-5021-account-create-update-mpthz\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.340937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qshl\" (UniqueName: 
\"kubernetes.io/projected/677c0fed-1e1f-4155-95ee-86291a16effa-kube-api-access-7qshl\") pod \"barbican-f352-account-create-update-fzwc2\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.340991 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-operator-scripts\") pod \"cinder-db-create-8rqmc\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.341792 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-operator-scripts\") pod \"cinder-db-create-8rqmc\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.342763 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4df5511-77f2-4005-9179-933a42374141-operator-scripts\") pod \"cinder-5021-account-create-update-mpthz\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.370656 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnsw9\" (UniqueName: \"kubernetes.io/projected/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-kube-api-access-nnsw9\") pod \"cinder-db-create-8rqmc\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.389954 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-h7c79"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 
15:06:06.402748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr2cr\" (UniqueName: \"kubernetes.io/projected/e4df5511-77f2-4005-9179-933a42374141-kube-api-access-rr2cr\") pod \"cinder-5021-account-create-update-mpthz\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.403934 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.410730 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h7c79"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.410879 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.416852 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.417104 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vkwn7" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.417238 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.417358 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.429153 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.443110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677c0fed-1e1f-4155-95ee-86291a16effa-operator-scripts\") pod \"barbican-f352-account-create-update-fzwc2\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.443247 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qshl\" (UniqueName: \"kubernetes.io/projected/677c0fed-1e1f-4155-95ee-86291a16effa-kube-api-access-7qshl\") pod \"barbican-f352-account-create-update-fzwc2\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.443697 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-39a1-account-create-update-46zz2"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.444166 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677c0fed-1e1f-4155-95ee-86291a16effa-operator-scripts\") pod \"barbican-f352-account-create-update-fzwc2\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.444865 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.459170 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.467634 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-39a1-account-create-update-46zz2"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.514387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qshl\" (UniqueName: \"kubernetes.io/projected/677c0fed-1e1f-4155-95ee-86291a16effa-kube-api-access-7qshl\") pod \"barbican-f352-account-create-update-fzwc2\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.524532 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-fzqj6"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.527473 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.544754 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-combined-ca-bundle\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.544856 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-config-data\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.544894 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkpth\" (UniqueName: \"kubernetes.io/projected/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-kube-api-access-zkpth\") pod \"neutron-39a1-account-create-update-46zz2\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.544941 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjw8l\" (UniqueName: \"kubernetes.io/projected/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-kube-api-access-vjw8l\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.544988 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-operator-scripts\") pod 
\"neutron-39a1-account-create-update-46zz2\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.564510 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fzqj6"] Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.603870 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647512 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-operator-scripts\") pod \"neutron-39a1-account-create-update-46zz2\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-combined-ca-bundle\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647726 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-config-data\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647776 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c1a3789-de6b-4030-ab64-a9f504133124-operator-scripts\") pod \"neutron-db-create-fzqj6\" (UID: 
\"0c1a3789-de6b-4030-ab64-a9f504133124\") " pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647812 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkpth\" (UniqueName: \"kubernetes.io/projected/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-kube-api-access-zkpth\") pod \"neutron-39a1-account-create-update-46zz2\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647887 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfwxs\" (UniqueName: \"kubernetes.io/projected/0c1a3789-de6b-4030-ab64-a9f504133124-kube-api-access-hfwxs\") pod \"neutron-db-create-fzqj6\" (UID: \"0c1a3789-de6b-4030-ab64-a9f504133124\") " pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.647912 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjw8l\" (UniqueName: \"kubernetes.io/projected/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-kube-api-access-vjw8l\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.649737 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-operator-scripts\") pod \"neutron-39a1-account-create-update-46zz2\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.655738 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-combined-ca-bundle\") pod 
\"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.656922 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-config-data\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.678345 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjw8l\" (UniqueName: \"kubernetes.io/projected/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-kube-api-access-vjw8l\") pod \"keystone-db-sync-h7c79\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.680018 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkpth\" (UniqueName: \"kubernetes.io/projected/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-kube-api-access-zkpth\") pod \"neutron-39a1-account-create-update-46zz2\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.750225 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c1a3789-de6b-4030-ab64-a9f504133124-operator-scripts\") pod \"neutron-db-create-fzqj6\" (UID: \"0c1a3789-de6b-4030-ab64-a9f504133124\") " pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.750705 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfwxs\" (UniqueName: \"kubernetes.io/projected/0c1a3789-de6b-4030-ab64-a9f504133124-kube-api-access-hfwxs\") pod \"neutron-db-create-fzqj6\" (UID: 
\"0c1a3789-de6b-4030-ab64-a9f504133124\") " pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.751603 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c1a3789-de6b-4030-ab64-a9f504133124-operator-scripts\") pod \"neutron-db-create-fzqj6\" (UID: \"0c1a3789-de6b-4030-ab64-a9f504133124\") " pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.769961 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfwxs\" (UniqueName: \"kubernetes.io/projected/0c1a3789-de6b-4030-ab64-a9f504133124-kube-api-access-hfwxs\") pod \"neutron-db-create-fzqj6\" (UID: \"0c1a3789-de6b-4030-ab64-a9f504133124\") " pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.818879 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.835133 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:06 crc kubenswrapper[4823]: I0126 15:06:06.862685 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.012934 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7wvns"] Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.103027 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5021-account-create-update-mpthz"] Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.165963 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-8rqmc"] Jan 26 15:06:07 crc kubenswrapper[4823]: W0126 15:06:07.176725 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92f40fd5_6264_4e1c_a0ff_94f71a0d994c.slice/crio-b159b785d227e5c516704d3af90f32f3ca735fdcc96594e336f6bb90e6792a44 WatchSource:0}: Error finding container b159b785d227e5c516704d3af90f32f3ca735fdcc96594e336f6bb90e6792a44: Status 404 returned error can't find the container with id b159b785d227e5c516704d3af90f32f3ca735fdcc96594e336f6bb90e6792a44 Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.289867 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-f352-account-create-update-fzwc2"] Jan 26 15:06:07 crc kubenswrapper[4823]: W0126 15:06:07.295316 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod677c0fed_1e1f_4155_95ee_86291a16effa.slice/crio-b1a7c940d1c563917bcd39222e2a22dde7569112aac34844980c2acf09c56486 WatchSource:0}: Error finding container b1a7c940d1c563917bcd39222e2a22dde7569112aac34844980c2acf09c56486: Status 404 returned error can't find the container with id b1a7c940d1c563917bcd39222e2a22dde7569112aac34844980c2acf09c56486 Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.392401 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-39a1-account-create-update-46zz2"] Jan 26 
15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.413048 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-fzqj6"] Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.465228 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h7c79"] Jan 26 15:06:07 crc kubenswrapper[4823]: W0126 15:06:07.479018 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc20b6e53_0f09_4af8_8d2b_02c1d50e3730.slice/crio-8d01df6fa5e9154d0b5b447e3592dca8cb23f80a23381f9374995767ce95c86c WatchSource:0}: Error finding container 8d01df6fa5e9154d0b5b447e3592dca8cb23f80a23381f9374995767ce95c86c: Status 404 returned error can't find the container with id 8d01df6fa5e9154d0b5b447e3592dca8cb23f80a23381f9374995767ce95c86c Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.654128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5021-account-create-update-mpthz" event={"ID":"e4df5511-77f2-4005-9179-933a42374141","Type":"ContainerStarted","Data":"21879019d045fc133189ab18c89f0b4a011c4162f170d998769cf5821288791b"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.654192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5021-account-create-update-mpthz" event={"ID":"e4df5511-77f2-4005-9179-933a42374141","Type":"ContainerStarted","Data":"3f1cff762079a8d34a97c527a7b5f746ed3513f0390ed60e147a9533bf3d4ece"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.659152 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7c79" event={"ID":"c20b6e53-0f09-4af8-8d2b-02c1d50e3730","Type":"ContainerStarted","Data":"8d01df6fa5e9154d0b5b447e3592dca8cb23f80a23381f9374995767ce95c86c"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.677012 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fzqj6" 
event={"ID":"0c1a3789-de6b-4030-ab64-a9f504133124","Type":"ContainerStarted","Data":"214b02c4f683851a0a1e18db3b21a3f55a562742c7455e97c2cc27567c220d25"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.682135 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wvns" event={"ID":"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736","Type":"ContainerStarted","Data":"653aca22ea7335d198580d40a1e9271aeb3f5ad2e813b7743b369da49739b642"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.682200 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wvns" event={"ID":"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736","Type":"ContainerStarted","Data":"e0bfd6f7bc1dfa07adf7a668622b5b1fe8a22c5bbeac763914af6e0abe01c63b"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.695105 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-39a1-account-create-update-46zz2" event={"ID":"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6","Type":"ContainerStarted","Data":"67959d134404df6a0b99776aa10b0d20df484160099ac280233af17e669aa677"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.698504 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-5021-account-create-update-mpthz" podStartSLOduration=1.698463338 podStartE2EDuration="1.698463338s" podCreationTimestamp="2026-01-26 15:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:07.678059501 +0000 UTC m=+1164.363522606" watchObservedRunningTime="2026-01-26 15:06:07.698463338 +0000 UTC m=+1164.383926443" Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.706485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8rqmc" event={"ID":"92f40fd5-6264-4e1c-a0ff-94f71a0d994c","Type":"ContainerStarted","Data":"59e57f5606f7c786d0d950e89e03a9153bb90c3a4aef2df8a2b31c2e10e0f846"} Jan 26 
15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.706571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8rqmc" event={"ID":"92f40fd5-6264-4e1c-a0ff-94f71a0d994c","Type":"ContainerStarted","Data":"b159b785d227e5c516704d3af90f32f3ca735fdcc96594e336f6bb90e6792a44"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.717726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f352-account-create-update-fzwc2" event={"ID":"677c0fed-1e1f-4155-95ee-86291a16effa","Type":"ContainerStarted","Data":"12096333952f0dd8c96cb162ae9d50ed9b683b0f58988ffc754e393865453295"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.718121 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f352-account-create-update-fzwc2" event={"ID":"677c0fed-1e1f-4155-95ee-86291a16effa","Type":"ContainerStarted","Data":"b1a7c940d1c563917bcd39222e2a22dde7569112aac34844980c2acf09c56486"} Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.788003 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-8rqmc" podStartSLOduration=1.787974121 podStartE2EDuration="1.787974121s" podCreationTimestamp="2026-01-26 15:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:07.731319185 +0000 UTC m=+1164.416782300" watchObservedRunningTime="2026-01-26 15:06:07.787974121 +0000 UTC m=+1164.473437226" Jan 26 15:06:07 crc kubenswrapper[4823]: I0126 15:06:07.791299 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-f352-account-create-update-fzwc2" podStartSLOduration=1.791290972 podStartE2EDuration="1.791290972s" podCreationTimestamp="2026-01-26 15:06:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:07.766274309 +0000 
UTC m=+1164.451737424" watchObservedRunningTime="2026-01-26 15:06:07.791290972 +0000 UTC m=+1164.476754077" Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.740429 4823 generic.go:334] "Generic (PLEG): container finished" podID="92f40fd5-6264-4e1c-a0ff-94f71a0d994c" containerID="59e57f5606f7c786d0d950e89e03a9153bb90c3a4aef2df8a2b31c2e10e0f846" exitCode=0 Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.740700 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8rqmc" event={"ID":"92f40fd5-6264-4e1c-a0ff-94f71a0d994c","Type":"ContainerDied","Data":"59e57f5606f7c786d0d950e89e03a9153bb90c3a4aef2df8a2b31c2e10e0f846"} Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.754029 4823 generic.go:334] "Generic (PLEG): container finished" podID="677c0fed-1e1f-4155-95ee-86291a16effa" containerID="12096333952f0dd8c96cb162ae9d50ed9b683b0f58988ffc754e393865453295" exitCode=0 Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.754196 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f352-account-create-update-fzwc2" event={"ID":"677c0fed-1e1f-4155-95ee-86291a16effa","Type":"ContainerDied","Data":"12096333952f0dd8c96cb162ae9d50ed9b683b0f58988ffc754e393865453295"} Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.763898 4823 generic.go:334] "Generic (PLEG): container finished" podID="e4df5511-77f2-4005-9179-933a42374141" containerID="21879019d045fc133189ab18c89f0b4a011c4162f170d998769cf5821288791b" exitCode=0 Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.764003 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5021-account-create-update-mpthz" event={"ID":"e4df5511-77f2-4005-9179-933a42374141","Type":"ContainerDied","Data":"21879019d045fc133189ab18c89f0b4a011c4162f170d998769cf5821288791b"} Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.765835 4823 generic.go:334] "Generic (PLEG): container finished" podID="0c1a3789-de6b-4030-ab64-a9f504133124" 
containerID="2b43a4cafb4611e599baf8abf6a3faa08c08c49454c7a4966390ccd4cdf30156" exitCode=0 Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.765894 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fzqj6" event={"ID":"0c1a3789-de6b-4030-ab64-a9f504133124","Type":"ContainerDied","Data":"2b43a4cafb4611e599baf8abf6a3faa08c08c49454c7a4966390ccd4cdf30156"} Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.772204 4823 generic.go:334] "Generic (PLEG): container finished" podID="a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" containerID="653aca22ea7335d198580d40a1e9271aeb3f5ad2e813b7743b369da49739b642" exitCode=0 Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.772317 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wvns" event={"ID":"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736","Type":"ContainerDied","Data":"653aca22ea7335d198580d40a1e9271aeb3f5ad2e813b7743b369da49739b642"} Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.782635 4823 generic.go:334] "Generic (PLEG): container finished" podID="8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" containerID="61e98fa470a3788913eade183aa51901c129b80e1df8aa8cfc6dcd0643ab2ae2" exitCode=0 Jan 26 15:06:08 crc kubenswrapper[4823]: I0126 15:06:08.782720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-39a1-account-create-update-46zz2" event={"ID":"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6","Type":"ContainerDied","Data":"61e98fa470a3788913eade183aa51901c129b80e1df8aa8cfc6dcd0643ab2ae2"} Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.250779 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.354890 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-operator-scripts\") pod \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.355131 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkdnr\" (UniqueName: \"kubernetes.io/projected/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-kube-api-access-jkdnr\") pod \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\" (UID: \"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736\") " Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.356655 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" (UID: "a8e91b70-4f0c-4abc-bbb5-c7f75dc94736"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.366828 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-kube-api-access-jkdnr" (OuterVolumeSpecName: "kube-api-access-jkdnr") pod "a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" (UID: "a8e91b70-4f0c-4abc-bbb5-c7f75dc94736"). InnerVolumeSpecName "kube-api-access-jkdnr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.457204 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.457597 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkdnr\" (UniqueName: \"kubernetes.io/projected/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736-kube-api-access-jkdnr\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.795706 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7wvns" event={"ID":"a8e91b70-4f0c-4abc-bbb5-c7f75dc94736","Type":"ContainerDied","Data":"e0bfd6f7bc1dfa07adf7a668622b5b1fe8a22c5bbeac763914af6e0abe01c63b"} Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.795766 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0bfd6f7bc1dfa07adf7a668622b5b1fe8a22c5bbeac763914af6e0abe01c63b" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.795730 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7wvns" Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.798395 4823 generic.go:334] "Generic (PLEG): container finished" podID="461e74af-b7a9-4451-a07d-42f47a806286" containerID="609c61bad94595b78b905ced3dc30429d010fd1a0abaff984a6aad556b09de3f" exitCode=0 Jan 26 15:06:09 crc kubenswrapper[4823]: I0126 15:06:09.798596 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gp7n5" event={"ID":"461e74af-b7a9-4451-a07d-42f47a806286","Type":"ContainerDied","Data":"609c61bad94595b78b905ced3dc30429d010fd1a0abaff984a6aad556b09de3f"} Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.206680 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.277585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rr2cr\" (UniqueName: \"kubernetes.io/projected/e4df5511-77f2-4005-9179-933a42374141-kube-api-access-rr2cr\") pod \"e4df5511-77f2-4005-9179-933a42374141\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.278442 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4df5511-77f2-4005-9179-933a42374141-operator-scripts\") pod \"e4df5511-77f2-4005-9179-933a42374141\" (UID: \"e4df5511-77f2-4005-9179-933a42374141\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.280303 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4df5511-77f2-4005-9179-933a42374141-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e4df5511-77f2-4005-9179-933a42374141" (UID: "e4df5511-77f2-4005-9179-933a42374141"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.284381 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4df5511-77f2-4005-9179-933a42374141-kube-api-access-rr2cr" (OuterVolumeSpecName: "kube-api-access-rr2cr") pod "e4df5511-77f2-4005-9179-933a42374141" (UID: "e4df5511-77f2-4005-9179-933a42374141"). InnerVolumeSpecName "kube-api-access-rr2cr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.339472 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.348140 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.356080 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.382896 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-operator-scripts\") pod \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.383074 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qshl\" (UniqueName: \"kubernetes.io/projected/677c0fed-1e1f-4155-95ee-86291a16effa-kube-api-access-7qshl\") pod \"677c0fed-1e1f-4155-95ee-86291a16effa\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.383144 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfwxs\" (UniqueName: \"kubernetes.io/projected/0c1a3789-de6b-4030-ab64-a9f504133124-kube-api-access-hfwxs\") pod \"0c1a3789-de6b-4030-ab64-a9f504133124\" (UID: \"0c1a3789-de6b-4030-ab64-a9f504133124\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.383174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnsw9\" (UniqueName: \"kubernetes.io/projected/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-kube-api-access-nnsw9\") pod \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\" (UID: \"92f40fd5-6264-4e1c-a0ff-94f71a0d994c\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.383203 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c1a3789-de6b-4030-ab64-a9f504133124-operator-scripts\") pod \"0c1a3789-de6b-4030-ab64-a9f504133124\" (UID: \"0c1a3789-de6b-4030-ab64-a9f504133124\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.383252 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677c0fed-1e1f-4155-95ee-86291a16effa-operator-scripts\") pod \"677c0fed-1e1f-4155-95ee-86291a16effa\" (UID: \"677c0fed-1e1f-4155-95ee-86291a16effa\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.384465 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rr2cr\" (UniqueName: \"kubernetes.io/projected/e4df5511-77f2-4005-9179-933a42374141-kube-api-access-rr2cr\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.384486 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e4df5511-77f2-4005-9179-933a42374141-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.387106 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.387273 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c1a3789-de6b-4030-ab64-a9f504133124-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c1a3789-de6b-4030-ab64-a9f504133124" (UID: "0c1a3789-de6b-4030-ab64-a9f504133124"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.388488 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "92f40fd5-6264-4e1c-a0ff-94f71a0d994c" (UID: "92f40fd5-6264-4e1c-a0ff-94f71a0d994c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.392177 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/677c0fed-1e1f-4155-95ee-86291a16effa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "677c0fed-1e1f-4155-95ee-86291a16effa" (UID: "677c0fed-1e1f-4155-95ee-86291a16effa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.393753 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c1a3789-de6b-4030-ab64-a9f504133124-kube-api-access-hfwxs" (OuterVolumeSpecName: "kube-api-access-hfwxs") pod "0c1a3789-de6b-4030-ab64-a9f504133124" (UID: "0c1a3789-de6b-4030-ab64-a9f504133124"). InnerVolumeSpecName "kube-api-access-hfwxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.393791 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/677c0fed-1e1f-4155-95ee-86291a16effa-kube-api-access-7qshl" (OuterVolumeSpecName: "kube-api-access-7qshl") pod "677c0fed-1e1f-4155-95ee-86291a16effa" (UID: "677c0fed-1e1f-4155-95ee-86291a16effa"). InnerVolumeSpecName "kube-api-access-7qshl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.393874 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-kube-api-access-nnsw9" (OuterVolumeSpecName: "kube-api-access-nnsw9") pod "92f40fd5-6264-4e1c-a0ff-94f71a0d994c" (UID: "92f40fd5-6264-4e1c-a0ff-94f71a0d994c"). InnerVolumeSpecName "kube-api-access-nnsw9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.484919 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-operator-scripts\") pod \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485437 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" (UID: "8c7d2689-33ea-47e3-ae2a-ad3b80f526b6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485473 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkpth\" (UniqueName: \"kubernetes.io/projected/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-kube-api-access-zkpth\") pod \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\" (UID: \"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6\") " Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485750 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnsw9\" (UniqueName: \"kubernetes.io/projected/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-kube-api-access-nnsw9\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485766 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c1a3789-de6b-4030-ab64-a9f504133124-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485775 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485786 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/677c0fed-1e1f-4155-95ee-86291a16effa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485795 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/92f40fd5-6264-4e1c-a0ff-94f71a0d994c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485804 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qshl\" (UniqueName: \"kubernetes.io/projected/677c0fed-1e1f-4155-95ee-86291a16effa-kube-api-access-7qshl\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.485812 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfwxs\" (UniqueName: \"kubernetes.io/projected/0c1a3789-de6b-4030-ab64-a9f504133124-kube-api-access-hfwxs\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.489398 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-kube-api-access-zkpth" (OuterVolumeSpecName: "kube-api-access-zkpth") pod "8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" (UID: "8c7d2689-33ea-47e3-ae2a-ad3b80f526b6"). InnerVolumeSpecName "kube-api-access-zkpth". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.591015 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkpth\" (UniqueName: \"kubernetes.io/projected/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6-kube-api-access-zkpth\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.811685 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5021-account-create-update-mpthz" event={"ID":"e4df5511-77f2-4005-9179-933a42374141","Type":"ContainerDied","Data":"3f1cff762079a8d34a97c527a7b5f746ed3513f0390ed60e147a9533bf3d4ece"} Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.811751 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f1cff762079a8d34a97c527a7b5f746ed3513f0390ed60e147a9533bf3d4ece" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.811783 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5021-account-create-update-mpthz" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.813169 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-fzqj6" event={"ID":"0c1a3789-de6b-4030-ab64-a9f504133124","Type":"ContainerDied","Data":"214b02c4f683851a0a1e18db3b21a3f55a562742c7455e97c2cc27567c220d25"} Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.813196 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="214b02c4f683851a0a1e18db3b21a3f55a562742c7455e97c2cc27567c220d25" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.813302 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-fzqj6" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.828095 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-39a1-account-create-update-46zz2" event={"ID":"8c7d2689-33ea-47e3-ae2a-ad3b80f526b6","Type":"ContainerDied","Data":"67959d134404df6a0b99776aa10b0d20df484160099ac280233af17e669aa677"} Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.828173 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67959d134404df6a0b99776aa10b0d20df484160099ac280233af17e669aa677" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.828299 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-39a1-account-create-update-46zz2" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.833426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-8rqmc" event={"ID":"92f40fd5-6264-4e1c-a0ff-94f71a0d994c","Type":"ContainerDied","Data":"b159b785d227e5c516704d3af90f32f3ca735fdcc96594e336f6bb90e6792a44"} Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.833480 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-8rqmc" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.833482 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b159b785d227e5c516704d3af90f32f3ca735fdcc96594e336f6bb90e6792a44" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.843312 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-f352-account-create-update-fzwc2" Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.843553 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-f352-account-create-update-fzwc2" event={"ID":"677c0fed-1e1f-4155-95ee-86291a16effa","Type":"ContainerDied","Data":"b1a7c940d1c563917bcd39222e2a22dde7569112aac34844980c2acf09c56486"} Jan 26 15:06:10 crc kubenswrapper[4823]: I0126 15:06:10.844716 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1a7c940d1c563917bcd39222e2a22dde7569112aac34844980c2acf09c56486" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.282016 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gp7n5" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.367681 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-db-sync-config-data\") pod \"461e74af-b7a9-4451-a07d-42f47a806286\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.367854 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-combined-ca-bundle\") pod \"461e74af-b7a9-4451-a07d-42f47a806286\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.368089 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxk4w\" (UniqueName: \"kubernetes.io/projected/461e74af-b7a9-4451-a07d-42f47a806286-kube-api-access-nxk4w\") pod \"461e74af-b7a9-4451-a07d-42f47a806286\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.368141 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-config-data\") pod \"461e74af-b7a9-4451-a07d-42f47a806286\" (UID: \"461e74af-b7a9-4451-a07d-42f47a806286\") " Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.371452 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "461e74af-b7a9-4451-a07d-42f47a806286" (UID: "461e74af-b7a9-4451-a07d-42f47a806286"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.376588 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/461e74af-b7a9-4451-a07d-42f47a806286-kube-api-access-nxk4w" (OuterVolumeSpecName: "kube-api-access-nxk4w") pod "461e74af-b7a9-4451-a07d-42f47a806286" (UID: "461e74af-b7a9-4451-a07d-42f47a806286"). InnerVolumeSpecName "kube-api-access-nxk4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.391779 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "461e74af-b7a9-4451-a07d-42f47a806286" (UID: "461e74af-b7a9-4451-a07d-42f47a806286"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.420251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-config-data" (OuterVolumeSpecName: "config-data") pod "461e74af-b7a9-4451-a07d-42f47a806286" (UID: "461e74af-b7a9-4451-a07d-42f47a806286"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.470886 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxk4w\" (UniqueName: \"kubernetes.io/projected/461e74af-b7a9-4451-a07d-42f47a806286-kube-api-access-nxk4w\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.470945 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.470962 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.470976 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/461e74af-b7a9-4451-a07d-42f47a806286-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.883726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7c79" event={"ID":"c20b6e53-0f09-4af8-8d2b-02c1d50e3730","Type":"ContainerStarted","Data":"820a57e11cdaf4ced5c31c449c7034323316145b9f536e997874a6a3d2bec6f7"} Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.886717 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-gp7n5" event={"ID":"461e74af-b7a9-4451-a07d-42f47a806286","Type":"ContainerDied","Data":"6611467068a848880b7bad7528052bb5f4cd3dad87da4d73c08c195f495beb65"} Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.886768 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6611467068a848880b7bad7528052bb5f4cd3dad87da4d73c08c195f495beb65" Jan 26 15:06:14 crc 
kubenswrapper[4823]: I0126 15:06:14.886860 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-gp7n5" Jan 26 15:06:14 crc kubenswrapper[4823]: I0126 15:06:14.911288 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-h7c79" podStartSLOduration=2.04912456 podStartE2EDuration="8.911255301s" podCreationTimestamp="2026-01-26 15:06:06 +0000 UTC" firstStartedPulling="2026-01-26 15:06:07.481894587 +0000 UTC m=+1164.167357692" lastFinishedPulling="2026-01-26 15:06:14.344025328 +0000 UTC m=+1171.029488433" observedRunningTime="2026-01-26 15:06:14.90315124 +0000 UTC m=+1171.588614415" watchObservedRunningTime="2026-01-26 15:06:14.911255301 +0000 UTC m=+1171.596718437" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.807773 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2ktzn"] Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808471 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="677c0fed-1e1f-4155-95ee-86291a16effa" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808488 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="677c0fed-1e1f-4155-95ee-86291a16effa" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808503 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808509 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808517 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" containerName="mariadb-account-create-update" 
Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808523 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808545 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="461e74af-b7a9-4451-a07d-42f47a806286" containerName="glance-db-sync" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808551 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="461e74af-b7a9-4451-a07d-42f47a806286" containerName="glance-db-sync" Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808562 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4df5511-77f2-4005-9179-933a42374141" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808568 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4df5511-77f2-4005-9179-933a42374141" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808584 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c1a3789-de6b-4030-ab64-a9f504133124" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808590 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c1a3789-de6b-4030-ab64-a9f504133124" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: E0126 15:06:15.808600 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f40fd5-6264-4e1c-a0ff-94f71a0d994c" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808606 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f40fd5-6264-4e1c-a0ff-94f71a0d994c" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808778 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="677c0fed-1e1f-4155-95ee-86291a16effa" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808791 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c1a3789-de6b-4030-ab64-a9f504133124" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808802 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="461e74af-b7a9-4451-a07d-42f47a806286" containerName="glance-db-sync" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808814 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808824 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4df5511-77f2-4005-9179-933a42374141" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808830 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f40fd5-6264-4e1c-a0ff-94f71a0d994c" containerName="mariadb-database-create" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.808841 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" containerName="mariadb-account-create-update" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.809738 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.870076 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2ktzn"] Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.897052 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-dns-svc\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.897098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.897140 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-config\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.897166 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56qz\" (UniqueName: \"kubernetes.io/projected/9d8cad23-4493-4794-9126-cfabd3d31f32-kube-api-access-g56qz\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.897182 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.999733 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-config\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:15 crc kubenswrapper[4823]: I0126 15:06:15.999820 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g56qz\" (UniqueName: \"kubernetes.io/projected/9d8cad23-4493-4794-9126-cfabd3d31f32-kube-api-access-g56qz\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:15.999855 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.000037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-dns-svc\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.000064 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.001002 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-config\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.001229 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-nb\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.001295 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-sb\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.002054 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-dns-svc\") pod \"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.040946 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g56qz\" (UniqueName: \"kubernetes.io/projected/9d8cad23-4493-4794-9126-cfabd3d31f32-kube-api-access-g56qz\") pod 
\"dnsmasq-dns-554567b4f7-2ktzn\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.125287 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.476660 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2ktzn"] Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.907100 4823 generic.go:334] "Generic (PLEG): container finished" podID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerID="f1ae4da702002b348a4c92f61c55d97e2f00ac3a3e1cfc0c127bd8e3275e2951" exitCode=0 Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.907173 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" event={"ID":"9d8cad23-4493-4794-9126-cfabd3d31f32","Type":"ContainerDied","Data":"f1ae4da702002b348a4c92f61c55d97e2f00ac3a3e1cfc0c127bd8e3275e2951"} Jan 26 15:06:16 crc kubenswrapper[4823]: I0126 15:06:16.907214 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" event={"ID":"9d8cad23-4493-4794-9126-cfabd3d31f32","Type":"ContainerStarted","Data":"42c3b5a1de05dd21272202cc09bd9b53a4f7740522d2e7b3c7fb125f803cd416"} Jan 26 15:06:17 crc kubenswrapper[4823]: I0126 15:06:17.917809 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" event={"ID":"9d8cad23-4493-4794-9126-cfabd3d31f32","Type":"ContainerStarted","Data":"60569e413729c43960218876d3bcde04e7b9b84804be93f6fc0e2430ebc08d47"} Jan 26 15:06:17 crc kubenswrapper[4823]: I0126 15:06:17.918894 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:17 crc kubenswrapper[4823]: I0126 15:06:17.950578 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" podStartSLOduration=2.9505528229999998 podStartE2EDuration="2.950552823s" podCreationTimestamp="2026-01-26 15:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:17.944806047 +0000 UTC m=+1174.630269152" watchObservedRunningTime="2026-01-26 15:06:17.950552823 +0000 UTC m=+1174.636015928" Jan 26 15:06:19 crc kubenswrapper[4823]: I0126 15:06:19.938666 4823 generic.go:334] "Generic (PLEG): container finished" podID="c20b6e53-0f09-4af8-8d2b-02c1d50e3730" containerID="820a57e11cdaf4ced5c31c449c7034323316145b9f536e997874a6a3d2bec6f7" exitCode=0 Jan 26 15:06:19 crc kubenswrapper[4823]: I0126 15:06:19.938742 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7c79" event={"ID":"c20b6e53-0f09-4af8-8d2b-02c1d50e3730","Type":"ContainerDied","Data":"820a57e11cdaf4ced5c31c449c7034323316145b9f536e997874a6a3d2bec6f7"} Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.327805 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.446252 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjw8l\" (UniqueName: \"kubernetes.io/projected/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-kube-api-access-vjw8l\") pod \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.446422 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-combined-ca-bundle\") pod \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.446490 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-config-data\") pod \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\" (UID: \"c20b6e53-0f09-4af8-8d2b-02c1d50e3730\") " Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.454381 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-kube-api-access-vjw8l" (OuterVolumeSpecName: "kube-api-access-vjw8l") pod "c20b6e53-0f09-4af8-8d2b-02c1d50e3730" (UID: "c20b6e53-0f09-4af8-8d2b-02c1d50e3730"). InnerVolumeSpecName "kube-api-access-vjw8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.475496 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c20b6e53-0f09-4af8-8d2b-02c1d50e3730" (UID: "c20b6e53-0f09-4af8-8d2b-02c1d50e3730"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.505104 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-config-data" (OuterVolumeSpecName: "config-data") pod "c20b6e53-0f09-4af8-8d2b-02c1d50e3730" (UID: "c20b6e53-0f09-4af8-8d2b-02c1d50e3730"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.548544 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjw8l\" (UniqueName: \"kubernetes.io/projected/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-kube-api-access-vjw8l\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.548594 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.548609 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c20b6e53-0f09-4af8-8d2b-02c1d50e3730-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.964429 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7c79" event={"ID":"c20b6e53-0f09-4af8-8d2b-02c1d50e3730","Type":"ContainerDied","Data":"8d01df6fa5e9154d0b5b447e3592dca8cb23f80a23381f9374995767ce95c86c"} Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.964515 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d01df6fa5e9154d0b5b447e3592dca8cb23f80a23381f9374995767ce95c86c" Jan 26 15:06:21 crc kubenswrapper[4823]: I0126 15:06:21.964928 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h7c79" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.286197 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2ktzn"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.286997 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerName="dnsmasq-dns" containerID="cri-o://60569e413729c43960218876d3bcde04e7b9b84804be93f6fc0e2430ebc08d47" gracePeriod=10 Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.292615 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.299229 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lsbqj"] Jan 26 15:06:22 crc kubenswrapper[4823]: E0126 15:06:22.299730 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c20b6e53-0f09-4af8-8d2b-02c1d50e3730" containerName="keystone-db-sync" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.299755 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c20b6e53-0f09-4af8-8d2b-02c1d50e3730" containerName="keystone-db-sync" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.300007 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c20b6e53-0f09-4af8-8d2b-02c1d50e3730" containerName="keystone-db-sync" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.302348 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.313418 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.313699 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.313934 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.314105 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.320561 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vkwn7" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.366758 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lsbqj"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.376841 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-combined-ca-bundle\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.376903 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-fernet-keys\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.376998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-kfxxr\" (UniqueName: \"kubernetes.io/projected/3121c402-5e42-483c-8258-6683404f5f3e-kube-api-access-kfxxr\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.377033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-config-data\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.377107 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-credential-keys\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.377148 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-scripts\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.419667 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8pkcw"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.435219 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8pkcw"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.435378 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.479482 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-credential-keys\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.479549 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-scripts\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.479624 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-combined-ca-bundle\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.479647 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-fernet-keys\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.479731 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfxxr\" (UniqueName: \"kubernetes.io/projected/3121c402-5e42-483c-8258-6683404f5f3e-kube-api-access-kfxxr\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc 
kubenswrapper[4823]: I0126 15:06:22.479760 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-config-data\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.492035 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-scripts\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.495515 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-config-data\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.495877 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-combined-ca-bundle\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.497197 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-fernet-keys\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.498563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-credential-keys\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.521416 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfxxr\" (UniqueName: \"kubernetes.io/projected/3121c402-5e42-483c-8258-6683404f5f3e-kube-api-access-kfxxr\") pod \"keystone-bootstrap-lsbqj\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.570985 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-qx574"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.585576 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.586036 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-config\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.586067 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-dns-svc\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.586109 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t8sz\" (UniqueName: \"kubernetes.io/projected/536722ad-aa26-46e2-bea7-8098b47fffc4-kube-api-access-9t8sz\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.586140 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.598035 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.609159 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-796c7d56c9-6xg5x"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.610602 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624003 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624128 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pq89g" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624335 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624431 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624468 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624587 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.624595 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-rxqrg" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.631630 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.643288 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qx574"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.664996 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-9c2rp"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.674920 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688132 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-config-data\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-config-data\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688210 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t8sz\" (UniqueName: \"kubernetes.io/projected/536722ad-aa26-46e2-bea7-8098b47fffc4-kube-api-access-9t8sz\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688253 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688276 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-etc-machine-id\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " 
pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688311 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-combined-ca-bundle\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688332 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688381 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qmsg\" (UniqueName: \"kubernetes.io/projected/48ff8239-374f-4321-ad90-a17b01a30a72-kube-api-access-9qmsg\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688404 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-scripts\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688444 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48ff8239-374f-4321-ad90-a17b01a30a72-horizon-secret-key\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " 
pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ff8239-374f-4321-ad90-a17b01a30a72-logs\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbn8\" (UniqueName: \"kubernetes.io/projected/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-kube-api-access-brbn8\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-db-sync-config-data\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688568 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-scripts\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.688591 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-config\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc 
kubenswrapper[4823]: I0126 15:06:22.688613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-dns-svc\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.694406 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-sb\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.694502 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.694755 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.694860 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f778b" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.694972 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-nb\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.698283 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-config\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc 
kubenswrapper[4823]: I0126 15:06:22.712583 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9c2rp"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.721111 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-dns-svc\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.744832 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t8sz\" (UniqueName: \"kubernetes.io/projected/536722ad-aa26-46e2-bea7-8098b47fffc4-kube-api-access-9t8sz\") pod \"dnsmasq-dns-67795cd9-8pkcw\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.756758 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-796c7d56c9-6xg5x"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793256 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-db-sync-config-data\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-config\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793376 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-combined-ca-bundle\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793422 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-scripts\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793441 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsktx\" (UniqueName: \"kubernetes.io/projected/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-kube-api-access-zsktx\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-config-data\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-config-data\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-etc-machine-id\") pod 
\"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793581 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-combined-ca-bundle\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793607 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qmsg\" (UniqueName: \"kubernetes.io/projected/48ff8239-374f-4321-ad90-a17b01a30a72-kube-api-access-9qmsg\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793626 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-scripts\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793653 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48ff8239-374f-4321-ad90-a17b01a30a72-horizon-secret-key\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793668 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ff8239-374f-4321-ad90-a17b01a30a72-logs\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" 
Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.793693 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbn8\" (UniqueName: \"kubernetes.io/projected/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-kube-api-access-brbn8\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.806900 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-config-data\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.814060 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-db-sync-config-data\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.814156 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.819048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-scripts\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.821498 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-etc-machine-id\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 
crc kubenswrapper[4823]: I0126 15:06:22.823128 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-scripts\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.823352 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ff8239-374f-4321-ad90-a17b01a30a72-logs\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.826541 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.856268 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48ff8239-374f-4321-ad90-a17b01a30a72-horizon-secret-key\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.862779 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qmsg\" (UniqueName: \"kubernetes.io/projected/48ff8239-374f-4321-ad90-a17b01a30a72-kube-api-access-9qmsg\") pod \"horizon-796c7d56c9-6xg5x\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.871956 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.878813 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.884189 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.892201 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.896530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.896588 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.896688 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-log-httpd\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.896793 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-config-data\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " 
pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.899634 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-combined-ca-bundle\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.906639 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbn8\" (UniqueName: \"kubernetes.io/projected/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-kube-api-access-brbn8\") pod \"cinder-db-sync-qx574\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.932095 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.932503 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8pkcw"] Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.932951 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-694zm\" (UniqueName: \"kubernetes.io/projected/7565d18e-0ce0-432d-ab8f-10c43561b9f8-kube-api-access-694zm\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.932985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-config-data\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.933028 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-scripts\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.933105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-config\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.933148 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-combined-ca-bundle\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.933176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsktx\" (UniqueName: \"kubernetes.io/projected/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-kube-api-access-zsktx\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.933198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-run-httpd\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.967289 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-combined-ca-bundle\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:22 crc kubenswrapper[4823]: I0126 15:06:22.973719 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-config\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:22.994113 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-fs2xh"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.001437 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx574" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.001961 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsktx\" (UniqueName: \"kubernetes.io/projected/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-kube-api-access-zsktx\") pod \"neutron-db-sync-9c2rp\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") " pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.001683 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.012227 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k7zsm" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.012716 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.012940 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.086283 4823 generic.go:334] "Generic (PLEG): container finished" podID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerID="60569e413729c43960218876d3bcde04e7b9b84804be93f6fc0e2430ebc08d47" exitCode=0 Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.086349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" event={"ID":"9d8cad23-4493-4794-9126-cfabd3d31f32","Type":"ContainerDied","Data":"60569e413729c43960218876d3bcde04e7b9b84804be93f6fc0e2430ebc08d47"} Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.090908 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-nn9br"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.098870 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.101727 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.104322 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.105756 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.105810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.105863 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbp7g\" (UniqueName: \"kubernetes.io/projected/c5c91a8b-7077-4583-aa19-595408fb9003-kube-api-access-cbp7g\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.105884 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-combined-ca-bundle\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.105941 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c91a8b-7077-4583-aa19-595408fb9003-logs\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.105992 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-log-httpd\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.106016 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-694zm\" (UniqueName: \"kubernetes.io/projected/7565d18e-0ce0-432d-ab8f-10c43561b9f8-kube-api-access-694zm\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.106044 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-config-data\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.106080 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-scripts\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.106168 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.106214 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-scripts\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.106258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-config-data\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.107621 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.107726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-log-httpd\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.108059 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-run-httpd\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.114068 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-26l4g" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.117410 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-config-data\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.120908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-scripts\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.133251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-694zm\" (UniqueName: \"kubernetes.io/projected/7565d18e-0ce0-432d-ab8f-10c43561b9f8-kube-api-access-694zm\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.141633 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.162121 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.177520 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fs2xh"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.203592 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nn9br"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 
15:06:23.208659 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-scripts\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.208709 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-config-data\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.208766 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-combined-ca-bundle\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.208784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbp7g\" (UniqueName: \"kubernetes.io/projected/c5c91a8b-7077-4583-aa19-595408fb9003-kube-api-access-cbp7g\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.208817 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c91a8b-7077-4583-aa19-595408fb9003-logs\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.215426 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/c5c91a8b-7077-4583-aa19-595408fb9003-logs\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.217777 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.221255 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9c2rp" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.225183 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-scripts\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.227113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-combined-ca-bundle\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.232482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-config-data\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.236654 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.237015 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-85c7bb7457-vtqk5"] Jan 26 15:06:23 crc kubenswrapper[4823]: E0126 15:06:23.237640 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerName="init" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.237670 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerName="init" Jan 26 15:06:23 crc kubenswrapper[4823]: E0126 15:06:23.237686 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerName="dnsmasq-dns" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.237693 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerName="dnsmasq-dns" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.237948 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" containerName="dnsmasq-dns" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.239693 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.249526 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85c7bb7457-vtqk5"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.280336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbp7g\" (UniqueName: \"kubernetes.io/projected/c5c91a8b-7077-4583-aa19-595408fb9003-kube-api-access-cbp7g\") pod \"placement-db-sync-fs2xh\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.281525 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.311566 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-combined-ca-bundle\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.311664 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc6bv\" (UniqueName: \"kubernetes.io/projected/2dcb08f2-c175-4602-9a45-dad635436a22-kube-api-access-lc6bv\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.311708 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 
crc kubenswrapper[4823]: I0126 15:06:23.311763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-config\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.311798 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.311861 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-db-sync-config-data\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.313909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.314129 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgdwq\" (UniqueName: \"kubernetes.io/projected/bfb8b15e-b589-4777-ab0d-703cba188a74-kube-api-access-dgdwq\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " 
pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.400496 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fs2xh" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416158 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-nb\") pod \"9d8cad23-4493-4794-9126-cfabd3d31f32\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416279 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g56qz\" (UniqueName: \"kubernetes.io/projected/9d8cad23-4493-4794-9126-cfabd3d31f32-kube-api-access-g56qz\") pod \"9d8cad23-4493-4794-9126-cfabd3d31f32\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416357 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-config\") pod \"9d8cad23-4493-4794-9126-cfabd3d31f32\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416478 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-dns-svc\") pod \"9d8cad23-4493-4794-9126-cfabd3d31f32\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416539 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-sb\") pod \"9d8cad23-4493-4794-9126-cfabd3d31f32\" (UID: \"9d8cad23-4493-4794-9126-cfabd3d31f32\") " Jan 26 15:06:23 crc 
kubenswrapper[4823]: I0126 15:06:23.416805 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-config\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416841 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416872 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2782d482-e2f7-446f-86b1-d9e0933ed53b-horizon-secret-key\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416903 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2782d482-e2f7-446f-86b1-d9e0933ed53b-logs\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-db-sync-config-data\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.416991 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417025 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgdwq\" (UniqueName: \"kubernetes.io/projected/bfb8b15e-b589-4777-ab0d-703cba188a74-kube-api-access-dgdwq\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-config-data\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-combined-ca-bundle\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417109 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-scripts\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417143 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sbhj\" (UniqueName: \"kubernetes.io/projected/2782d482-e2f7-446f-86b1-d9e0933ed53b-kube-api-access-9sbhj\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417164 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc6bv\" (UniqueName: \"kubernetes.io/projected/2dcb08f2-c175-4602-9a45-dad635436a22-kube-api-access-lc6bv\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.417184 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.418463 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.426449 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-config\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.427179 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.434210 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.437471 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8cad23-4493-4794-9126-cfabd3d31f32-kube-api-access-g56qz" (OuterVolumeSpecName: "kube-api-access-g56qz") pod "9d8cad23-4493-4794-9126-cfabd3d31f32" (UID: "9d8cad23-4493-4794-9126-cfabd3d31f32"). InnerVolumeSpecName "kube-api-access-g56qz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.439411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-db-sync-config-data\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.441400 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-combined-ca-bundle\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.459892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc6bv\" (UniqueName: 
\"kubernetes.io/projected/2dcb08f2-c175-4602-9a45-dad635436a22-kube-api-access-lc6bv\") pod \"barbican-db-sync-nn9br\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.464389 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nn9br" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.483654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgdwq\" (UniqueName: \"kubernetes.io/projected/bfb8b15e-b589-4777-ab0d-703cba188a74-kube-api-access-dgdwq\") pod \"dnsmasq-dns-5b6dbdb6f5-q2tfj\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.499104 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.522347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2782d482-e2f7-446f-86b1-d9e0933ed53b-horizon-secret-key\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.536574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2782d482-e2f7-446f-86b1-d9e0933ed53b-logs\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.536953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-config-data\") pod \"horizon-85c7bb7457-vtqk5\" 
(UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.537022 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-scripts\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.537109 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sbhj\" (UniqueName: \"kubernetes.io/projected/2782d482-e2f7-446f-86b1-d9e0933ed53b-kube-api-access-9sbhj\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.537264 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g56qz\" (UniqueName: \"kubernetes.io/projected/9d8cad23-4493-4794-9126-cfabd3d31f32-kube-api-access-g56qz\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.537350 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2782d482-e2f7-446f-86b1-d9e0933ed53b-logs\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.538668 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-config-data\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.539264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-scripts\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.595209 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d8cad23-4493-4794-9126-cfabd3d31f32" (UID: "9d8cad23-4493-4794-9126-cfabd3d31f32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.610717 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2782d482-e2f7-446f-86b1-d9e0933ed53b-horizon-secret-key\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.639014 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.641957 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sbhj\" (UniqueName: \"kubernetes.io/projected/2782d482-e2f7-446f-86b1-d9e0933ed53b-kube-api-access-9sbhj\") pod \"horizon-85c7bb7457-vtqk5\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.687511 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-config" (OuterVolumeSpecName: "config") pod "9d8cad23-4493-4794-9126-cfabd3d31f32" (UID: 
"9d8cad23-4493-4794-9126-cfabd3d31f32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.709231 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d8cad23-4493-4794-9126-cfabd3d31f32" (UID: "9d8cad23-4493-4794-9126-cfabd3d31f32"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.709724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d8cad23-4493-4794-9126-cfabd3d31f32" (UID: "9d8cad23-4493-4794-9126-cfabd3d31f32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.727815 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lsbqj"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.743705 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.743747 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.743759 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8cad23-4493-4794-9126-cfabd3d31f32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.792319 4823 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8pkcw"] Jan 26 15:06:23 crc kubenswrapper[4823]: W0126 15:06:23.825628 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod536722ad_aa26_46e2_bea7_8098b47fffc4.slice/crio-92b6c5df3045264060baea3ec6b4e839be0b09a92b1f8f6c406fda287272bfac WatchSource:0}: Error finding container 92b6c5df3045264060baea3ec6b4e839be0b09a92b1f8f6c406fda287272bfac: Status 404 returned error can't find the container with id 92b6c5df3045264060baea3ec6b4e839be0b09a92b1f8f6c406fda287272bfac Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.855068 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-796c7d56c9-6xg5x"] Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.890955 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:23 crc kubenswrapper[4823]: W0126 15:06:23.948159 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ae97ed0_0d88_4581_ab58_b4a97f8947ad.slice/crio-c1f7846ddce442f12937c91c7c2e48c6a8b142b5c513d5eba71b3644dabc6ddc WatchSource:0}: Error finding container c1f7846ddce442f12937c91c7c2e48c6a8b142b5c513d5eba71b3644dabc6ddc: Status 404 returned error can't find the container with id c1f7846ddce442f12937c91c7c2e48c6a8b142b5c513d5eba71b3644dabc6ddc Jan 26 15:06:23 crc kubenswrapper[4823]: I0126 15:06:23.955829 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-qx574"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.100989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-796c7d56c9-6xg5x" event={"ID":"48ff8239-374f-4321-ad90-a17b01a30a72","Type":"ContainerStarted","Data":"a0a3f95aa6df6d5d09c392c25ce3ef506d6829f69caf4e73135a83749fe4b28c"} Jan 26 15:06:24 
crc kubenswrapper[4823]: I0126 15:06:24.106938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" event={"ID":"536722ad-aa26-46e2-bea7-8098b47fffc4","Type":"ContainerStarted","Data":"92b6c5df3045264060baea3ec6b4e839be0b09a92b1f8f6c406fda287272bfac"} Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.114400 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" event={"ID":"9d8cad23-4493-4794-9126-cfabd3d31f32","Type":"ContainerDied","Data":"42c3b5a1de05dd21272202cc09bd9b53a4f7740522d2e7b3c7fb125f803cd416"} Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.115192 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-554567b4f7-2ktzn" Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.118255 4823 scope.go:117] "RemoveContainer" containerID="60569e413729c43960218876d3bcde04e7b9b84804be93f6fc0e2430ebc08d47" Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.122033 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lsbqj" event={"ID":"3121c402-5e42-483c-8258-6683404f5f3e","Type":"ContainerStarted","Data":"45609188fb6f3e9f7b644220d0f96d44b50a32d7ca98e30077d7afb33fa09acf"} Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.130322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx574" event={"ID":"3ae97ed0-0d88-4581-ab58-b4a97f8947ad","Type":"ContainerStarted","Data":"c1f7846ddce442f12937c91c7c2e48c6a8b142b5c513d5eba71b3644dabc6ddc"} Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.154252 4823 scope.go:117] "RemoveContainer" containerID="f1ae4da702002b348a4c92f61c55d97e2f00ac3a3e1cfc0c127bd8e3275e2951" Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.155917 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2ktzn"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 
15:06:24.171280 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-554567b4f7-2ktzn"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.222315 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-9c2rp"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.263721 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.408225 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.418492 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-fs2xh"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.425685 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-nn9br"] Jan 26 15:06:24 crc kubenswrapper[4823]: W0126 15:06:24.431062 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dcb08f2_c175_4602_9a45_dad635436a22.slice/crio-6a2ae3427f3e1ae7afae0a1ae327646813b949f0d0e3ee59cca17ac543f51924 WatchSource:0}: Error finding container 6a2ae3427f3e1ae7afae0a1ae327646813b949f0d0e3ee59cca17ac543f51924: Status 404 returned error can't find the container with id 6a2ae3427f3e1ae7afae0a1ae327646813b949f0d0e3ee59cca17ac543f51924 Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.683534 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-85c7bb7457-vtqk5"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.845303 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-796c7d56c9-6xg5x"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.921449 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56c59d768f-mmj9s"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.923190 4823 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.933615 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:06:24 crc kubenswrapper[4823]: I0126 15:06:24.969845 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56c59d768f-mmj9s"] Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.098578 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm28p\" (UniqueName: \"kubernetes.io/projected/7826fb1c-12c8-42a5-9431-7d22aa0ea308-kube-api-access-bm28p\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.098654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7826fb1c-12c8-42a5-9431-7d22aa0ea308-logs\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.098729 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-scripts\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.098777 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-config-data\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 
26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.098800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7826fb1c-12c8-42a5-9431-7d22aa0ea308-horizon-secret-key\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.145560 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nn9br" event={"ID":"2dcb08f2-c175-4602-9a45-dad635436a22","Type":"ContainerStarted","Data":"6a2ae3427f3e1ae7afae0a1ae327646813b949f0d0e3ee59cca17ac543f51924"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.148533 4823 generic.go:334] "Generic (PLEG): container finished" podID="536722ad-aa26-46e2-bea7-8098b47fffc4" containerID="8aeaf5bd8749d11ef463f494123b9316b37b8c8d19af48559fb5775dab803a39" exitCode=0 Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.148605 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" event={"ID":"536722ad-aa26-46e2-bea7-8098b47fffc4","Type":"ContainerDied","Data":"8aeaf5bd8749d11ef463f494123b9316b37b8c8d19af48559fb5775dab803a39"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.159161 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lsbqj" event={"ID":"3121c402-5e42-483c-8258-6683404f5f3e","Type":"ContainerStarted","Data":"f51ddb6c216396b0a1e52278ec724444b12385fd1f9fff1815b49a839766b6a1"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.164704 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fs2xh" event={"ID":"c5c91a8b-7077-4583-aa19-595408fb9003","Type":"ContainerStarted","Data":"d2f22428b889a5a1c2d7d606d82879a851941ee6f994636d1e387bee84066b09"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.168210 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"7565d18e-0ce0-432d-ab8f-10c43561b9f8","Type":"ContainerStarted","Data":"c252aa106c5a1e25a6c8df640a52b48eb37ea72b1cc21b61d0af29d5ea112fde"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.188085 4823 generic.go:334] "Generic (PLEG): container finished" podID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerID="79293303cbf1cce027eb6bed38d5c7c790214d79ce23e19f0cb4559c08637b8d" exitCode=0 Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.188215 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" event={"ID":"bfb8b15e-b589-4777-ab0d-703cba188a74","Type":"ContainerDied","Data":"79293303cbf1cce027eb6bed38d5c7c790214d79ce23e19f0cb4559c08637b8d"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.188256 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" event={"ID":"bfb8b15e-b589-4777-ab0d-703cba188a74","Type":"ContainerStarted","Data":"4dd334135473b0fd4a7d2a7a9e9f36fe70ae019ed20c24c9cf036158360240fe"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.203103 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm28p\" (UniqueName: \"kubernetes.io/projected/7826fb1c-12c8-42a5-9431-7d22aa0ea308-kube-api-access-bm28p\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.203646 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7826fb1c-12c8-42a5-9431-7d22aa0ea308-logs\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.205491 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-scripts\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.206453 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7826fb1c-12c8-42a5-9431-7d22aa0ea308-logs\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.206775 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85c7bb7457-vtqk5" event={"ID":"2782d482-e2f7-446f-86b1-d9e0933ed53b","Type":"ContainerStarted","Data":"6fa160563fb2410737ce7d68319101c5b5c04804341b73039ba9426692da4120"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.206938 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-scripts\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.207314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-config-data\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.207524 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7826fb1c-12c8-42a5-9431-7d22aa0ea308-horizon-secret-key\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 
15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.211705 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-config-data\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.215860 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7826fb1c-12c8-42a5-9431-7d22aa0ea308-horizon-secret-key\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.219855 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c2rp" event={"ID":"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e","Type":"ContainerStarted","Data":"d2e0078e1fb0c6aba703a6928db4a92b7391e435673e16e0c87e303a9182265b"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.219909 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c2rp" event={"ID":"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e","Type":"ContainerStarted","Data":"39f729fd03612b2b512ab8efe61c1823632c7fe62a05eed118ab9cc4938aa609"} Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.233141 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm28p\" (UniqueName: \"kubernetes.io/projected/7826fb1c-12c8-42a5-9431-7d22aa0ea308-kube-api-access-bm28p\") pod \"horizon-56c59d768f-mmj9s\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.243598 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lsbqj" podStartSLOduration=3.243573886 podStartE2EDuration="3.243573886s" 
podCreationTimestamp="2026-01-26 15:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:25.209667041 +0000 UTC m=+1181.895130176" watchObservedRunningTime="2026-01-26 15:06:25.243573886 +0000 UTC m=+1181.929036991" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.267868 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-9c2rp" podStartSLOduration=3.267848219 podStartE2EDuration="3.267848219s" podCreationTimestamp="2026-01-26 15:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:25.264891959 +0000 UTC m=+1181.950355064" watchObservedRunningTime="2026-01-26 15:06:25.267848219 +0000 UTC m=+1181.953311324" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.320916 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.576052 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d8cad23-4493-4794-9126-cfabd3d31f32" path="/var/lib/kubelet/pods/9d8cad23-4493-4794-9126-cfabd3d31f32/volumes" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.703598 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.823250 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-config\") pod \"536722ad-aa26-46e2-bea7-8098b47fffc4\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.823383 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-nb\") pod \"536722ad-aa26-46e2-bea7-8098b47fffc4\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.823524 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-sb\") pod \"536722ad-aa26-46e2-bea7-8098b47fffc4\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.823618 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-dns-svc\") pod \"536722ad-aa26-46e2-bea7-8098b47fffc4\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.823641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t8sz\" (UniqueName: \"kubernetes.io/projected/536722ad-aa26-46e2-bea7-8098b47fffc4-kube-api-access-9t8sz\") pod \"536722ad-aa26-46e2-bea7-8098b47fffc4\" (UID: \"536722ad-aa26-46e2-bea7-8098b47fffc4\") " Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.849272 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/536722ad-aa26-46e2-bea7-8098b47fffc4-kube-api-access-9t8sz" (OuterVolumeSpecName: "kube-api-access-9t8sz") pod "536722ad-aa26-46e2-bea7-8098b47fffc4" (UID: "536722ad-aa26-46e2-bea7-8098b47fffc4"). InnerVolumeSpecName "kube-api-access-9t8sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.852631 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "536722ad-aa26-46e2-bea7-8098b47fffc4" (UID: "536722ad-aa26-46e2-bea7-8098b47fffc4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.862051 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "536722ad-aa26-46e2-bea7-8098b47fffc4" (UID: "536722ad-aa26-46e2-bea7-8098b47fffc4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.879468 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-config" (OuterVolumeSpecName: "config") pod "536722ad-aa26-46e2-bea7-8098b47fffc4" (UID: "536722ad-aa26-46e2-bea7-8098b47fffc4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.879914 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "536722ad-aa26-46e2-bea7-8098b47fffc4" (UID: "536722ad-aa26-46e2-bea7-8098b47fffc4"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.904985 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56c59d768f-mmj9s"] Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.941050 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.941105 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.941153 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t8sz\" (UniqueName: \"kubernetes.io/projected/536722ad-aa26-46e2-bea7-8098b47fffc4-kube-api-access-9t8sz\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.941178 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:25 crc kubenswrapper[4823]: I0126 15:06:25.941191 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/536722ad-aa26-46e2-bea7-8098b47fffc4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.243682 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.243668 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67795cd9-8pkcw" event={"ID":"536722ad-aa26-46e2-bea7-8098b47fffc4","Type":"ContainerDied","Data":"92b6c5df3045264060baea3ec6b4e839be0b09a92b1f8f6c406fda287272bfac"} Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.243829 4823 scope.go:117] "RemoveContainer" containerID="8aeaf5bd8749d11ef463f494123b9316b37b8c8d19af48559fb5775dab803a39" Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.251883 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" event={"ID":"bfb8b15e-b589-4777-ab0d-703cba188a74","Type":"ContainerStarted","Data":"3b9225fb996b43c0f44eef1ec3f8b759269219172406a6c92d4fb6fd03b0c96b"} Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.252127 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.254115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56c59d768f-mmj9s" event={"ID":"7826fb1c-12c8-42a5-9431-7d22aa0ea308","Type":"ContainerStarted","Data":"aee41cb976603807b906947dd09406d73036fe70c70ab0495f1d21bd78e37c29"} Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.282779 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" podStartSLOduration=4.282760763 podStartE2EDuration="4.282760763s" podCreationTimestamp="2026-01-26 15:06:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:26.28228512 +0000 UTC m=+1182.967748225" watchObservedRunningTime="2026-01-26 15:06:26.282760763 +0000 UTC m=+1182.968223868" Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.347666 4823 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8pkcw"] Jan 26 15:06:26 crc kubenswrapper[4823]: I0126 15:06:26.361354 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67795cd9-8pkcw"] Jan 26 15:06:27 crc kubenswrapper[4823]: I0126 15:06:27.589329 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536722ad-aa26-46e2-bea7-8098b47fffc4" path="/var/lib/kubelet/pods/536722ad-aa26-46e2-bea7-8098b47fffc4/volumes" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.397760 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85c7bb7457-vtqk5"] Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.439708 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75dbc957cb-ckfwc"] Jan 26 15:06:31 crc kubenswrapper[4823]: E0126 15:06:31.440205 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="536722ad-aa26-46e2-bea7-8098b47fffc4" containerName="init" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.440229 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="536722ad-aa26-46e2-bea7-8098b47fffc4" containerName="init" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.440511 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="536722ad-aa26-46e2-bea7-8098b47fffc4" containerName="init" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.441566 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.446780 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.460681 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75dbc957cb-ckfwc"] Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501609 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-config-data\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501684 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czmfm\" (UniqueName: \"kubernetes.io/projected/4f681696-41f2-470d-805c-5b70ea803542-kube-api-access-czmfm\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501728 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-secret-key\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-tls-certs\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " 
pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-scripts\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-combined-ca-bundle\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.501882 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f681696-41f2-470d-805c-5b70ea803542-logs\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.531486 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56c59d768f-mmj9s"] Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.591862 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6c6cbf99d4-vbwh8"] Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.593734 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.603440 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f681696-41f2-470d-805c-5b70ea803542-logs\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.603551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-config-data\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.603589 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czmfm\" (UniqueName: \"kubernetes.io/projected/4f681696-41f2-470d-805c-5b70ea803542-kube-api-access-czmfm\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.603627 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-secret-key\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.603666 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-tls-certs\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc 
kubenswrapper[4823]: I0126 15:06:31.603694 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-scripts\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.603733 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-combined-ca-bundle\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.604338 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f681696-41f2-470d-805c-5b70ea803542-logs\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.605718 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c6cbf99d4-vbwh8"] Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.611656 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-scripts\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.613267 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-config-data\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc 
kubenswrapper[4823]: I0126 15:06:31.635650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-secret-key\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.642414 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czmfm\" (UniqueName: \"kubernetes.io/projected/4f681696-41f2-470d-805c-5b70ea803542-kube-api-access-czmfm\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.646666 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-tls-certs\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.657092 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-combined-ca-bundle\") pod \"horizon-75dbc957cb-ckfwc\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.711110 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn845\" (UniqueName: \"kubernetes.io/projected/4c60001f-e43a-4559-ba67-134f88a3f2a6-kube-api-access-hn845\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.711249 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c60001f-e43a-4559-ba67-134f88a3f2a6-logs\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.712571 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c60001f-e43a-4559-ba67-134f88a3f2a6-scripts\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.712759 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-horizon-tls-certs\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.713777 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-combined-ca-bundle\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.714110 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-horizon-secret-key\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.714187 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c60001f-e43a-4559-ba67-134f88a3f2a6-config-data\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.791246 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.818594 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-horizon-tls-certs\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.818712 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-combined-ca-bundle\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.818795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-horizon-secret-key\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.818831 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c60001f-e43a-4559-ba67-134f88a3f2a6-config-data\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " 
pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.818881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn845\" (UniqueName: \"kubernetes.io/projected/4c60001f-e43a-4559-ba67-134f88a3f2a6-kube-api-access-hn845\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.818933 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c60001f-e43a-4559-ba67-134f88a3f2a6-logs\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.819013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c60001f-e43a-4559-ba67-134f88a3f2a6-scripts\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.822308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c60001f-e43a-4559-ba67-134f88a3f2a6-logs\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.822449 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c60001f-e43a-4559-ba67-134f88a3f2a6-scripts\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.823986 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-horizon-secret-key\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.824926 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4c60001f-e43a-4559-ba67-134f88a3f2a6-config-data\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.825176 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-combined-ca-bundle\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.832064 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c60001f-e43a-4559-ba67-134f88a3f2a6-horizon-tls-certs\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:31 crc kubenswrapper[4823]: I0126 15:06:31.842930 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn845\" (UniqueName: \"kubernetes.io/projected/4c60001f-e43a-4559-ba67-134f88a3f2a6-kube-api-access-hn845\") pod \"horizon-6c6cbf99d4-vbwh8\" (UID: \"4c60001f-e43a-4559-ba67-134f88a3f2a6\") " pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:32 crc kubenswrapper[4823]: I0126 15:06:32.040787 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:06:32 crc kubenswrapper[4823]: I0126 15:06:32.350262 4823 generic.go:334] "Generic (PLEG): container finished" podID="3121c402-5e42-483c-8258-6683404f5f3e" containerID="f51ddb6c216396b0a1e52278ec724444b12385fd1f9fff1815b49a839766b6a1" exitCode=0 Jan 26 15:06:32 crc kubenswrapper[4823]: I0126 15:06:32.350333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lsbqj" event={"ID":"3121c402-5e42-483c-8258-6683404f5f3e","Type":"ContainerDied","Data":"f51ddb6c216396b0a1e52278ec724444b12385fd1f9fff1815b49a839766b6a1"} Jan 26 15:06:33 crc kubenswrapper[4823]: I0126 15:06:33.500708 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:06:33 crc kubenswrapper[4823]: I0126 15:06:33.577587 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hv5xq"] Jan 26 15:06:33 crc kubenswrapper[4823]: I0126 15:06:33.577843 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-hv5xq" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns" containerID="cri-o://3fd40d56b0675043d40ef452c1e772849efead7fd3a6d7cdbf8fe9cb209af31c" gracePeriod=10 Jan 26 15:06:34 crc kubenswrapper[4823]: I0126 15:06:34.371135 4823 generic.go:334] "Generic (PLEG): container finished" podID="f90430a4-242c-43dd-9c41-11e67170985a" containerID="3fd40d56b0675043d40ef452c1e772849efead7fd3a6d7cdbf8fe9cb209af31c" exitCode=0 Jan 26 15:06:34 crc kubenswrapper[4823]: I0126 15:06:34.371215 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hv5xq" event={"ID":"f90430a4-242c-43dd-9c41-11e67170985a","Type":"ContainerDied","Data":"3fd40d56b0675043d40ef452c1e772849efead7fd3a6d7cdbf8fe9cb209af31c"} Jan 26 15:06:34 crc kubenswrapper[4823]: I0126 15:06:34.508471 4823 patch_prober.go:28] 
interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:06:34 crc kubenswrapper[4823]: I0126 15:06:34.508555 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:06:38 crc kubenswrapper[4823]: I0126 15:06:38.275384 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-hv5xq" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 26 15:06:43 crc kubenswrapper[4823]: I0126 15:06:43.275121 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-hv5xq" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 26 15:06:43 crc kubenswrapper[4823]: E0126 15:06:43.997083 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 15:06:43 crc kubenswrapper[4823]: E0126 15:06:43.997707 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd5h678h64h58fh67bh646h9dhb7h95h659h98h9ch84h685h7fh6ch66bh656h5f4h687h565h64h54h695hf9h96h557h55dh5d8h549h9chffq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9sbhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-85c7bb7457-vtqk5_openstack(2782d482-e2f7-446f-86b1-d9e0933ed53b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:06:44 crc kubenswrapper[4823]: E0126 
15:06:44.003195 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-85c7bb7457-vtqk5" podUID="2782d482-e2f7-446f-86b1-d9e0933ed53b" Jan 26 15:06:44 crc kubenswrapper[4823]: E0126 15:06:44.006437 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 26 15:06:44 crc kubenswrapper[4823]: E0126 15:06:44.006724 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ncbh79h676h85h5c4h5d7h59h79hd9h55ch5b9h5bdh646hc9h5bbh5fdhbdh579h56h5cchb8h77h5c4h674h5dbh5b7h5d7h55bh646h549h569h696q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9qmsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-796c7d56c9-6xg5x_openstack(48ff8239-374f-4321-ad90-a17b01a30a72): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:06:44 crc kubenswrapper[4823]: E0126 
15:06:44.012809 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-796c7d56c9-6xg5x" podUID="48ff8239-374f-4321-ad90-a17b01a30a72" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.077048 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.125323 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-fernet-keys\") pod \"3121c402-5e42-483c-8258-6683404f5f3e\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.125483 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfxxr\" (UniqueName: \"kubernetes.io/projected/3121c402-5e42-483c-8258-6683404f5f3e-kube-api-access-kfxxr\") pod \"3121c402-5e42-483c-8258-6683404f5f3e\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.125512 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-config-data\") pod \"3121c402-5e42-483c-8258-6683404f5f3e\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.125580 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-credential-keys\") pod 
\"3121c402-5e42-483c-8258-6683404f5f3e\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.125673 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-scripts\") pod \"3121c402-5e42-483c-8258-6683404f5f3e\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.125697 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-combined-ca-bundle\") pod \"3121c402-5e42-483c-8258-6683404f5f3e\" (UID: \"3121c402-5e42-483c-8258-6683404f5f3e\") " Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.134210 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3121c402-5e42-483c-8258-6683404f5f3e-kube-api-access-kfxxr" (OuterVolumeSpecName: "kube-api-access-kfxxr") pod "3121c402-5e42-483c-8258-6683404f5f3e" (UID: "3121c402-5e42-483c-8258-6683404f5f3e"). InnerVolumeSpecName "kube-api-access-kfxxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.134530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3121c402-5e42-483c-8258-6683404f5f3e" (UID: "3121c402-5e42-483c-8258-6683404f5f3e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.135100 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3121c402-5e42-483c-8258-6683404f5f3e" (UID: "3121c402-5e42-483c-8258-6683404f5f3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.137804 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-scripts" (OuterVolumeSpecName: "scripts") pod "3121c402-5e42-483c-8258-6683404f5f3e" (UID: "3121c402-5e42-483c-8258-6683404f5f3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.153613 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-config-data" (OuterVolumeSpecName: "config-data") pod "3121c402-5e42-483c-8258-6683404f5f3e" (UID: "3121c402-5e42-483c-8258-6683404f5f3e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.161988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3121c402-5e42-483c-8258-6683404f5f3e" (UID: "3121c402-5e42-483c-8258-6683404f5f3e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.228344 4823 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.228414 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.228472 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.228502 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.228540 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfxxr\" (UniqueName: \"kubernetes.io/projected/3121c402-5e42-483c-8258-6683404f5f3e-kube-api-access-kfxxr\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.228554 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3121c402-5e42-483c-8258-6683404f5f3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.502833 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lsbqj" event={"ID":"3121c402-5e42-483c-8258-6683404f5f3e","Type":"ContainerDied","Data":"45609188fb6f3e9f7b644220d0f96d44b50a32d7ca98e30077d7afb33fa09acf"} Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 
15:06:44.502891 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45609188fb6f3e9f7b644220d0f96d44b50a32d7ca98e30077d7afb33fa09acf" Jan 26 15:06:44 crc kubenswrapper[4823]: I0126 15:06:44.502948 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lsbqj" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.184345 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lsbqj"] Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.195624 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lsbqj"] Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.267948 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lnlnm"] Jan 26 15:06:45 crc kubenswrapper[4823]: E0126 15:06:45.268633 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3121c402-5e42-483c-8258-6683404f5f3e" containerName="keystone-bootstrap" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.268665 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3121c402-5e42-483c-8258-6683404f5f3e" containerName="keystone-bootstrap" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.268918 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3121c402-5e42-483c-8258-6683404f5f3e" containerName="keystone-bootstrap" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.269817 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.273112 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.273246 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vkwn7" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.273116 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.273311 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.273862 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.277954 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lnlnm"] Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.361761 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-credential-keys\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.361822 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-fernet-keys\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.361863 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-combined-ca-bundle\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.361883 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-scripts\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.361917 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-config-data\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.361948 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxmkk\" (UniqueName: \"kubernetes.io/projected/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-kube-api-access-xxmkk\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.464421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-credential-keys\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.464489 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-fernet-keys\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.464534 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-combined-ca-bundle\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.464567 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-scripts\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.464616 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-config-data\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.464665 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxmkk\" (UniqueName: \"kubernetes.io/projected/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-kube-api-access-xxmkk\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.470868 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-fernet-keys\") pod \"keystone-bootstrap-lnlnm\" (UID: 
\"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.471741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-scripts\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.473040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-combined-ca-bundle\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.477096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-config-data\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.480869 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-credential-keys\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.482078 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxmkk\" (UniqueName: \"kubernetes.io/projected/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-kube-api-access-xxmkk\") pod \"keystone-bootstrap-lnlnm\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") " pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 
15:06:45.592918 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3121c402-5e42-483c-8258-6683404f5f3e" path="/var/lib/kubelet/pods/3121c402-5e42-483c-8258-6683404f5f3e/volumes" Jan 26 15:06:45 crc kubenswrapper[4823]: I0126 15:06:45.597512 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lnlnm" Jan 26 15:06:48 crc kubenswrapper[4823]: I0126 15:06:48.275596 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-hv5xq" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Jan 26 15:06:48 crc kubenswrapper[4823]: I0126 15:06:48.276250 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:06:53 crc kubenswrapper[4823]: E0126 15:06:53.311102 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 26 15:06:53 crc kubenswrapper[4823]: E0126 15:06:53.312841 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n688h5cch5f8h589h654h8dh66fh5bbh589h9chd5h686h577hddhdfh677h684h5cdhcch56dh5bh65ch669hdbh564h54bh56bh547hbfh4h595h57bq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-694zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7565d18e-0ce0-432d-ab8f-10c43561b9f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:06:53 crc kubenswrapper[4823]: E0126 15:06:53.817409 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 26 15:06:53 crc kubenswrapper[4823]: E0126 15:06:53.817690 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lc6bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-nn9br_openstack(2dcb08f2-c175-4602-9a45-dad635436a22): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:06:53 crc kubenswrapper[4823]: E0126 15:06:53.819206 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-nn9br" 
podUID="2dcb08f2-c175-4602-9a45-dad635436a22" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.917546 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.923113 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967036 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-config-data\") pod \"2782d482-e2f7-446f-86b1-d9e0933ed53b\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ff8239-374f-4321-ad90-a17b01a30a72-logs\") pod \"48ff8239-374f-4321-ad90-a17b01a30a72\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967256 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qmsg\" (UniqueName: \"kubernetes.io/projected/48ff8239-374f-4321-ad90-a17b01a30a72-kube-api-access-9qmsg\") pod \"48ff8239-374f-4321-ad90-a17b01a30a72\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967335 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-scripts\") pod \"2782d482-e2f7-446f-86b1-d9e0933ed53b\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967486 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/2782d482-e2f7-446f-86b1-d9e0933ed53b-logs\") pod \"2782d482-e2f7-446f-86b1-d9e0933ed53b\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967602 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48ff8239-374f-4321-ad90-a17b01a30a72-horizon-secret-key\") pod \"48ff8239-374f-4321-ad90-a17b01a30a72\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967747 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2782d482-e2f7-446f-86b1-d9e0933ed53b-horizon-secret-key\") pod \"2782d482-e2f7-446f-86b1-d9e0933ed53b\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.967918 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-scripts\") pod \"48ff8239-374f-4321-ad90-a17b01a30a72\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.968025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9sbhj\" (UniqueName: \"kubernetes.io/projected/2782d482-e2f7-446f-86b1-d9e0933ed53b-kube-api-access-9sbhj\") pod \"2782d482-e2f7-446f-86b1-d9e0933ed53b\" (UID: \"2782d482-e2f7-446f-86b1-d9e0933ed53b\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.968174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-config-data\") pod \"48ff8239-374f-4321-ad90-a17b01a30a72\" (UID: \"48ff8239-374f-4321-ad90-a17b01a30a72\") " Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 
15:06:53.968523 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48ff8239-374f-4321-ad90-a17b01a30a72-logs" (OuterVolumeSpecName: "logs") pod "48ff8239-374f-4321-ad90-a17b01a30a72" (UID: "48ff8239-374f-4321-ad90-a17b01a30a72"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.969072 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48ff8239-374f-4321-ad90-a17b01a30a72-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.969298 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-config-data" (OuterVolumeSpecName: "config-data") pod "2782d482-e2f7-446f-86b1-d9e0933ed53b" (UID: "2782d482-e2f7-446f-86b1-d9e0933ed53b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.970344 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-config-data" (OuterVolumeSpecName: "config-data") pod "48ff8239-374f-4321-ad90-a17b01a30a72" (UID: "48ff8239-374f-4321-ad90-a17b01a30a72"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.970675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-scripts" (OuterVolumeSpecName: "scripts") pod "2782d482-e2f7-446f-86b1-d9e0933ed53b" (UID: "2782d482-e2f7-446f-86b1-d9e0933ed53b"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.970994 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2782d482-e2f7-446f-86b1-d9e0933ed53b-logs" (OuterVolumeSpecName: "logs") pod "2782d482-e2f7-446f-86b1-d9e0933ed53b" (UID: "2782d482-e2f7-446f-86b1-d9e0933ed53b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.971174 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-scripts" (OuterVolumeSpecName: "scripts") pod "48ff8239-374f-4321-ad90-a17b01a30a72" (UID: "48ff8239-374f-4321-ad90-a17b01a30a72"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.976556 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48ff8239-374f-4321-ad90-a17b01a30a72-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "48ff8239-374f-4321-ad90-a17b01a30a72" (UID: "48ff8239-374f-4321-ad90-a17b01a30a72"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.977248 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ff8239-374f-4321-ad90-a17b01a30a72-kube-api-access-9qmsg" (OuterVolumeSpecName: "kube-api-access-9qmsg") pod "48ff8239-374f-4321-ad90-a17b01a30a72" (UID: "48ff8239-374f-4321-ad90-a17b01a30a72"). InnerVolumeSpecName "kube-api-access-9qmsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.978228 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2782d482-e2f7-446f-86b1-d9e0933ed53b-kube-api-access-9sbhj" (OuterVolumeSpecName: "kube-api-access-9sbhj") pod "2782d482-e2f7-446f-86b1-d9e0933ed53b" (UID: "2782d482-e2f7-446f-86b1-d9e0933ed53b"). InnerVolumeSpecName "kube-api-access-9sbhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:53 crc kubenswrapper[4823]: I0126 15:06:53.989148 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2782d482-e2f7-446f-86b1-d9e0933ed53b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "2782d482-e2f7-446f-86b1-d9e0933ed53b" (UID: "2782d482-e2f7-446f-86b1-d9e0933ed53b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071388 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071431 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qmsg\" (UniqueName: \"kubernetes.io/projected/48ff8239-374f-4321-ad90-a17b01a30a72-kube-api-access-9qmsg\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071447 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2782d482-e2f7-446f-86b1-d9e0933ed53b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071455 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2782d482-e2f7-446f-86b1-d9e0933ed53b-logs\") on node \"crc\" 
DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071465 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48ff8239-374f-4321-ad90-a17b01a30a72-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071475 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/2782d482-e2f7-446f-86b1-d9e0933ed53b-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071483 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071491 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9sbhj\" (UniqueName: \"kubernetes.io/projected/2782d482-e2f7-446f-86b1-d9e0933ed53b-kube-api-access-9sbhj\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.071501 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48ff8239-374f-4321-ad90-a17b01a30a72-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.624850 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-796c7d56c9-6xg5x" event={"ID":"48ff8239-374f-4321-ad90-a17b01a30a72","Type":"ContainerDied","Data":"a0a3f95aa6df6d5d09c392c25ce3ef506d6829f69caf4e73135a83749fe4b28c"} Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.625002 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-796c7d56c9-6xg5x" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.638806 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-85c7bb7457-vtqk5" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.638879 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-85c7bb7457-vtqk5" event={"ID":"2782d482-e2f7-446f-86b1-d9e0933ed53b","Type":"ContainerDied","Data":"6fa160563fb2410737ce7d68319101c5b5c04804341b73039ba9426692da4120"} Jan 26 15:06:54 crc kubenswrapper[4823]: E0126 15:06:54.642786 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-nn9br" podUID="2dcb08f2-c175-4602-9a45-dad635436a22" Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.715499 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-85c7bb7457-vtqk5"] Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.726462 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-85c7bb7457-vtqk5"] Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.759542 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-796c7d56c9-6xg5x"] Jan 26 15:06:54 crc kubenswrapper[4823]: I0126 15:06:54.771480 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-796c7d56c9-6xg5x"] Jan 26 15:06:55 crc kubenswrapper[4823]: E0126 15:06:55.168074 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 26 15:06:55 crc kubenswrapper[4823]: E0126 15:06:55.168243 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-brbn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-qx574_openstack(3ae97ed0-0d88-4581-ab58-b4a97f8947ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:06:55 crc kubenswrapper[4823]: E0126 15:06:55.170299 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-qx574" podUID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.272028 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.296860 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-nb\") pod \"f90430a4-242c-43dd-9c41-11e67170985a\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.298151 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-dns-svc\") pod \"f90430a4-242c-43dd-9c41-11e67170985a\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.298196 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-config\") pod \"f90430a4-242c-43dd-9c41-11e67170985a\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " Jan 26 15:06:55 crc 
kubenswrapper[4823]: I0126 15:06:55.308183 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-sb\") pod \"f90430a4-242c-43dd-9c41-11e67170985a\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.308221 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2tff\" (UniqueName: \"kubernetes.io/projected/f90430a4-242c-43dd-9c41-11e67170985a-kube-api-access-p2tff\") pod \"f90430a4-242c-43dd-9c41-11e67170985a\" (UID: \"f90430a4-242c-43dd-9c41-11e67170985a\") " Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.322664 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f90430a4-242c-43dd-9c41-11e67170985a-kube-api-access-p2tff" (OuterVolumeSpecName: "kube-api-access-p2tff") pod "f90430a4-242c-43dd-9c41-11e67170985a" (UID: "f90430a4-242c-43dd-9c41-11e67170985a"). InnerVolumeSpecName "kube-api-access-p2tff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.392551 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f90430a4-242c-43dd-9c41-11e67170985a" (UID: "f90430a4-242c-43dd-9c41-11e67170985a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.399273 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f90430a4-242c-43dd-9c41-11e67170985a" (UID: "f90430a4-242c-43dd-9c41-11e67170985a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.411406 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2tff\" (UniqueName: \"kubernetes.io/projected/f90430a4-242c-43dd-9c41-11e67170985a-kube-api-access-p2tff\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.411448 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.411462 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.415868 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f90430a4-242c-43dd-9c41-11e67170985a" (UID: "f90430a4-242c-43dd-9c41-11e67170985a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.417053 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-config" (OuterVolumeSpecName: "config") pod "f90430a4-242c-43dd-9c41-11e67170985a" (UID: "f90430a4-242c-43dd-9c41-11e67170985a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.513872 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.513941 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f90430a4-242c-43dd-9c41-11e67170985a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.574002 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2782d482-e2f7-446f-86b1-d9e0933ed53b" path="/var/lib/kubelet/pods/2782d482-e2f7-446f-86b1-d9e0933ed53b/volumes" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.575151 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ff8239-374f-4321-ad90-a17b01a30a72" path="/var/lib/kubelet/pods/48ff8239-374f-4321-ad90-a17b01a30a72/volumes" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.665528 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-hv5xq" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.666069 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-hv5xq" event={"ID":"f90430a4-242c-43dd-9c41-11e67170985a","Type":"ContainerDied","Data":"8239b38be1064618b029a12aebb1831fb6089b8cee688bde25a995f1d8cdf49f"} Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.666115 4823 scope.go:117] "RemoveContainer" containerID="3fd40d56b0675043d40ef452c1e772849efead7fd3a6d7cdbf8fe9cb209af31c" Jan 26 15:06:55 crc kubenswrapper[4823]: E0126 15:06:55.667579 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-qx574" podUID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.670344 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75dbc957cb-ckfwc"] Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.680228 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6c6cbf99d4-vbwh8"] Jan 26 15:06:55 crc kubenswrapper[4823]: W0126 15:06:55.705483 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c60001f_e43a_4559_ba67_134f88a3f2a6.slice/crio-c34a62e5a760bee4a3793533746b4a36fd19a640ecff4cc04364bb1c0b9671a0 WatchSource:0}: Error finding container c34a62e5a760bee4a3793533746b4a36fd19a640ecff4cc04364bb1c0b9671a0: Status 404 returned error can't find the container with id c34a62e5a760bee4a3793533746b4a36fd19a640ecff4cc04364bb1c0b9671a0 Jan 26 15:06:55 crc kubenswrapper[4823]: W0126 15:06:55.710026 4823 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f681696_41f2_470d_805c_5b70ea803542.slice/crio-eec9d08fba070fb300eb8d5a35356cbd3fd818901511c244c830b8f3c1e3d9fa WatchSource:0}: Error finding container eec9d08fba070fb300eb8d5a35356cbd3fd818901511c244c830b8f3c1e3d9fa: Status 404 returned error can't find the container with id eec9d08fba070fb300eb8d5a35356cbd3fd818901511c244c830b8f3c1e3d9fa Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.720551 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hv5xq"] Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.727213 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-hv5xq"] Jan 26 15:06:55 crc kubenswrapper[4823]: I0126 15:06:55.727839 4823 scope.go:117] "RemoveContainer" containerID="33f4543b4a8e81fd49d7e7ef8853a8d8cb9d51e063b856739767dc24ef270b36" Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.155224 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lnlnm"] Jan 26 15:06:57 crc kubenswrapper[4823]: W0126 15:06:56.155975 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod71eea416_ec1b_47dd_a6e2_b56ebb89a07f.slice/crio-b2b0a31af186c1c6e8ed57b73f2f2354c4300ae97b02f02a0ab4b0283af6ea64 WatchSource:0}: Error finding container b2b0a31af186c1c6e8ed57b73f2f2354c4300ae97b02f02a0ab4b0283af6ea64: Status 404 returned error can't find the container with id b2b0a31af186c1c6e8ed57b73f2f2354c4300ae97b02f02a0ab4b0283af6ea64 Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.675103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6cbf99d4-vbwh8" event={"ID":"4c60001f-e43a-4559-ba67-134f88a3f2a6","Type":"ContainerStarted","Data":"72d632018f95d9b59149f0d6f913d89c33aed9e37c1c88770f81b86d3a853c06"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.675765 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6cbf99d4-vbwh8" event={"ID":"4c60001f-e43a-4559-ba67-134f88a3f2a6","Type":"ContainerStarted","Data":"c34a62e5a760bee4a3793533746b4a36fd19a640ecff4cc04364bb1c0b9671a0"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.676680 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lnlnm" event={"ID":"71eea416-ec1b-47dd-a6e2-b56ebb89a07f","Type":"ContainerStarted","Data":"ddae000bdecc60171ac85cea63e209b0ecfc8031aaca515b5810843d2659ff34"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.676707 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lnlnm" event={"ID":"71eea416-ec1b-47dd-a6e2-b56ebb89a07f","Type":"ContainerStarted","Data":"b2b0a31af186c1c6e8ed57b73f2f2354c4300ae97b02f02a0ab4b0283af6ea64"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.679837 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fs2xh" event={"ID":"c5c91a8b-7077-4583-aa19-595408fb9003","Type":"ContainerStarted","Data":"02d0134d4ecb0e0d29dd85b9d5c98ec01b3ac4c257702bf1299e26fd6e12286c"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.687936 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7565d18e-0ce0-432d-ab8f-10c43561b9f8","Type":"ContainerStarted","Data":"a4166120b7b3bc522ddea0b888416fef22a7fa63ef2caf28bfd0dcd259400da4"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.703242 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lnlnm" podStartSLOduration=11.703215882 podStartE2EDuration="11.703215882s" podCreationTimestamp="2026-01-26 15:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:56.702568894 +0000 UTC m=+1213.388031999" watchObservedRunningTime="2026-01-26 
15:06:56.703215882 +0000 UTC m=+1213.388678987" Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.707940 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dbc957cb-ckfwc" event={"ID":"4f681696-41f2-470d-805c-5b70ea803542","Type":"ContainerStarted","Data":"ae177c04f8f1dcb05bd9666d753f2e2bd9fda6779e26a6637805474c893cb5fe"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.707989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dbc957cb-ckfwc" event={"ID":"4f681696-41f2-470d-805c-5b70ea803542","Type":"ContainerStarted","Data":"d81b813c0f64e4fbf4e118c4fde902c88138d105ee75288b1a27977e3f090c94"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.707998 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dbc957cb-ckfwc" event={"ID":"4f681696-41f2-470d-805c-5b70ea803542","Type":"ContainerStarted","Data":"eec9d08fba070fb300eb8d5a35356cbd3fd818901511c244c830b8f3c1e3d9fa"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.712319 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56c59d768f-mmj9s" event={"ID":"7826fb1c-12c8-42a5-9431-7d22aa0ea308","Type":"ContainerStarted","Data":"085b93c655b76f2d33410a85aa5aa5f9d70a889a0c0a1851f70c687913c24d64"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.712399 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56c59d768f-mmj9s" event={"ID":"7826fb1c-12c8-42a5-9431-7d22aa0ea308","Type":"ContainerStarted","Data":"b287f0bdeb2297f752e9a2d6164fd625a912e48bceae83a2756d0cfabcfb8af0"} Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.712591 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56c59d768f-mmj9s" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon-log" containerID="cri-o://b287f0bdeb2297f752e9a2d6164fd625a912e48bceae83a2756d0cfabcfb8af0" gracePeriod=30 Jan 26 15:06:57 crc kubenswrapper[4823]: 
I0126 15:06:56.713077 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56c59d768f-mmj9s" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon" containerID="cri-o://085b93c655b76f2d33410a85aa5aa5f9d70a889a0c0a1851f70c687913c24d64" gracePeriod=30 Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.721815 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-fs2xh" podStartSLOduration=5.325176086 podStartE2EDuration="34.721792059s" podCreationTimestamp="2026-01-26 15:06:22 +0000 UTC" firstStartedPulling="2026-01-26 15:06:24.426921114 +0000 UTC m=+1181.112384219" lastFinishedPulling="2026-01-26 15:06:53.823537087 +0000 UTC m=+1210.509000192" observedRunningTime="2026-01-26 15:06:56.721397029 +0000 UTC m=+1213.406860134" watchObservedRunningTime="2026-01-26 15:06:56.721792059 +0000 UTC m=+1213.407255164" Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.796348 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56c59d768f-mmj9s" podStartSLOduration=4.9075231089999996 podStartE2EDuration="32.796114497s" podCreationTimestamp="2026-01-26 15:06:24 +0000 UTC" firstStartedPulling="2026-01-26 15:06:25.934923728 +0000 UTC m=+1182.620386833" lastFinishedPulling="2026-01-26 15:06:53.823515116 +0000 UTC m=+1210.508978221" observedRunningTime="2026-01-26 15:06:56.742932606 +0000 UTC m=+1213.428395701" watchObservedRunningTime="2026-01-26 15:06:56.796114497 +0000 UTC m=+1213.481577602" Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:56.816722 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-75dbc957cb-ckfwc" podStartSLOduration=25.81669372 podStartE2EDuration="25.81669372s" podCreationTimestamp="2026-01-26 15:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 
15:06:56.768963006 +0000 UTC m=+1213.454426131" watchObservedRunningTime="2026-01-26 15:06:56.81669372 +0000 UTC m=+1213.502156825" Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:57.579405 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f90430a4-242c-43dd-9c41-11e67170985a" path="/var/lib/kubelet/pods/f90430a4-242c-43dd-9c41-11e67170985a/volumes" Jan 26 15:06:57 crc kubenswrapper[4823]: I0126 15:06:57.737193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6c6cbf99d4-vbwh8" event={"ID":"4c60001f-e43a-4559-ba67-134f88a3f2a6","Type":"ContainerStarted","Data":"dcf5c7a2c971fc311c401a7076cbb14d595fcd2516e154311f02b52129480da7"} Jan 26 15:06:58 crc kubenswrapper[4823]: I0126 15:06:58.274334 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-hv5xq" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: i/o timeout" Jan 26 15:07:01 crc kubenswrapper[4823]: I0126 15:07:01.792856 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:07:01 crc kubenswrapper[4823]: I0126 15:07:01.793809 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:07:02 crc kubenswrapper[4823]: I0126 15:07:02.041237 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:07:02 crc kubenswrapper[4823]: I0126 15:07:02.041859 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:07:04 crc kubenswrapper[4823]: I0126 15:07:04.508691 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial 
tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 15:07:04 crc kubenswrapper[4823]: I0126 15:07:04.509153 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 15:07:04 crc kubenswrapper[4823]: I0126 15:07:04.509212 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2"
Jan 26 15:07:04 crc kubenswrapper[4823]: I0126 15:07:04.510274 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6349657ac17d7db3f38b64f373c6e824084e4cf157cbb0ce8765094b3f648c48"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 15:07:04 crc kubenswrapper[4823]: I0126 15:07:04.510346 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://6349657ac17d7db3f38b64f373c6e824084e4cf157cbb0ce8765094b3f648c48" gracePeriod=600
Jan 26 15:07:05 crc kubenswrapper[4823]: I0126 15:07:05.321555 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56c59d768f-mmj9s"
Jan 26 15:07:05 crc kubenswrapper[4823]: I0126 15:07:05.590410 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6c6cbf99d4-vbwh8" podStartSLOduration=34.590346118 podStartE2EDuration="34.590346118s" podCreationTimestamp="2026-01-26 15:06:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:06:57.763693939 +0000 UTC m=+1214.449157044" watchObservedRunningTime="2026-01-26 15:07:05.590346118 +0000 UTC m=+1222.275809243"
Jan 26 15:07:06 crc kubenswrapper[4823]: I0126 15:07:06.850333 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="6349657ac17d7db3f38b64f373c6e824084e4cf157cbb0ce8765094b3f648c48" exitCode=0
Jan 26 15:07:06 crc kubenswrapper[4823]: I0126 15:07:06.850426 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"6349657ac17d7db3f38b64f373c6e824084e4cf157cbb0ce8765094b3f648c48"}
Jan 26 15:07:06 crc kubenswrapper[4823]: I0126 15:07:06.850475 4823 scope.go:117] "RemoveContainer" containerID="ced2fb81c930871220e3d3d613f6291ccb0288d32b2aebbbf2e414d7715540a7"
Jan 26 15:07:07 crc kubenswrapper[4823]: I0126 15:07:07.860153 4823 generic.go:334] "Generic (PLEG): container finished" podID="71eea416-ec1b-47dd-a6e2-b56ebb89a07f" containerID="ddae000bdecc60171ac85cea63e209b0ecfc8031aaca515b5810843d2659ff34" exitCode=0
Jan 26 15:07:07 crc kubenswrapper[4823]: I0126 15:07:07.860290 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lnlnm" event={"ID":"71eea416-ec1b-47dd-a6e2-b56ebb89a07f","Type":"ContainerDied","Data":"ddae000bdecc60171ac85cea63e209b0ecfc8031aaca515b5810843d2659ff34"}
Jan 26 15:07:07 crc kubenswrapper[4823]: I0126 15:07:07.864139 4823 generic.go:334] "Generic (PLEG): container finished" podID="8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" containerID="d2e0078e1fb0c6aba703a6928db4a92b7391e435673e16e0c87e303a9182265b" exitCode=0
Jan 26 15:07:07 crc kubenswrapper[4823]: I0126 15:07:07.864175 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c2rp" event={"ID":"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e","Type":"ContainerDied","Data":"d2e0078e1fb0c6aba703a6928db4a92b7391e435673e16e0c87e303a9182265b"}
Jan 26 15:07:08 crc kubenswrapper[4823]: E0126 15:07:08.721941 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest"
Jan 26 15:07:08 crc kubenswrapper[4823]: E0126 15:07:08.722606 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-694zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7565d18e-0ce0-432d-ab8f-10c43561b9f8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 26 15:07:08 crc kubenswrapper[4823]: I0126 15:07:08.875283 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"5873fe7ad32e2369de7d83b599dba09a2b10db32679ec89fa6711c86f67ecbb2"}
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.521472 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lnlnm"
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.538975 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9c2rp"
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.689807 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-fernet-keys\") pod \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.689887 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-combined-ca-bundle\") pod \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.689960 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-combined-ca-bundle\") pod \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.690087 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-config\") pod \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.690129 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-scripts\") pod \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.690196 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-credential-keys\") pod \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.690252 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-config-data\") pod \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.690318 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxmkk\" (UniqueName: \"kubernetes.io/projected/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-kube-api-access-xxmkk\") pod \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\" (UID: \"71eea416-ec1b-47dd-a6e2-b56ebb89a07f\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.690413 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsktx\" (UniqueName: \"kubernetes.io/projected/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-kube-api-access-zsktx\") pod \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\" (UID: \"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e\") "
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.711555 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "71eea416-ec1b-47dd-a6e2-b56ebb89a07f" (UID: "71eea416-ec1b-47dd-a6e2-b56ebb89a07f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.711713 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "71eea416-ec1b-47dd-a6e2-b56ebb89a07f" (UID: "71eea416-ec1b-47dd-a6e2-b56ebb89a07f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.711825 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-kube-api-access-zsktx" (OuterVolumeSpecName: "kube-api-access-zsktx") pod "8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" (UID: "8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e"). InnerVolumeSpecName "kube-api-access-zsktx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.712017 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-scripts" (OuterVolumeSpecName: "scripts") pod "71eea416-ec1b-47dd-a6e2-b56ebb89a07f" (UID: "71eea416-ec1b-47dd-a6e2-b56ebb89a07f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.715952 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-kube-api-access-xxmkk" (OuterVolumeSpecName: "kube-api-access-xxmkk") pod "71eea416-ec1b-47dd-a6e2-b56ebb89a07f" (UID: "71eea416-ec1b-47dd-a6e2-b56ebb89a07f"). InnerVolumeSpecName "kube-api-access-xxmkk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.720306 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-config" (OuterVolumeSpecName: "config") pod "8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" (UID: "8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.727268 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" (UID: "8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.745475 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-config-data" (OuterVolumeSpecName: "config-data") pod "71eea416-ec1b-47dd-a6e2-b56ebb89a07f" (UID: "71eea416-ec1b-47dd-a6e2-b56ebb89a07f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.756591 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "71eea416-ec1b-47dd-a6e2-b56ebb89a07f" (UID: "71eea416-ec1b-47dd-a6e2-b56ebb89a07f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793336 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793403 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxmkk\" (UniqueName: \"kubernetes.io/projected/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-kube-api-access-xxmkk\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793418 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsktx\" (UniqueName: \"kubernetes.io/projected/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-kube-api-access-zsktx\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793429 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793445 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793455 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793465 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793475 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.793484 4823 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/71eea416-ec1b-47dd-a6e2-b56ebb89a07f-credential-keys\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.887340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lnlnm" event={"ID":"71eea416-ec1b-47dd-a6e2-b56ebb89a07f","Type":"ContainerDied","Data":"b2b0a31af186c1c6e8ed57b73f2f2354c4300ae97b02f02a0ab4b0283af6ea64"}
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.887685 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2b0a31af186c1c6e8ed57b73f2f2354c4300ae97b02f02a0ab4b0283af6ea64"
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.887411 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lnlnm"
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.901877 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-9c2rp"
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.901908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-9c2rp" event={"ID":"8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e","Type":"ContainerDied","Data":"39f729fd03612b2b512ab8efe61c1823632c7fe62a05eed118ab9cc4938aa609"}
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.901950 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39f729fd03612b2b512ab8efe61c1823632c7fe62a05eed118ab9cc4938aa609"
Jan 26 15:07:09 crc kubenswrapper[4823]: I0126 15:07:09.905103 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nn9br" event={"ID":"2dcb08f2-c175-4602-9a45-dad635436a22","Type":"ContainerStarted","Data":"31a9626d076460a91c0c8ab4199e737a293e7dd11fdfb153b6b506e88f02e14d"}
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.009530 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-nn9br" podStartSLOduration=3.121181966 podStartE2EDuration="48.009502916s" podCreationTimestamp="2026-01-26 15:06:22 +0000 UTC" firstStartedPulling="2026-01-26 15:06:24.43774654 +0000 UTC m=+1181.123209655" lastFinishedPulling="2026-01-26 15:07:09.3260675 +0000 UTC m=+1226.011530605" observedRunningTime="2026-01-26 15:07:09.945496648 +0000 UTC m=+1226.630959753" watchObservedRunningTime="2026-01-26 15:07:10.009502916 +0000 UTC m=+1226.694966021"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.040866 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-76d987df64-77wdm"]
Jan 26 15:07:10 crc kubenswrapper[4823]: E0126 15:07:10.041305 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041323 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns"
Jan 26 15:07:10 crc kubenswrapper[4823]: E0126 15:07:10.041337 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="init"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041344 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="init"
Jan 26 15:07:10 crc kubenswrapper[4823]: E0126 15:07:10.041384 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71eea416-ec1b-47dd-a6e2-b56ebb89a07f" containerName="keystone-bootstrap"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041392 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="71eea416-ec1b-47dd-a6e2-b56ebb89a07f" containerName="keystone-bootstrap"
Jan 26 15:07:10 crc kubenswrapper[4823]: E0126 15:07:10.041406 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" containerName="neutron-db-sync"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041412 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" containerName="neutron-db-sync"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041597 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="71eea416-ec1b-47dd-a6e2-b56ebb89a07f" containerName="keystone-bootstrap"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041618 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f90430a4-242c-43dd-9c41-11e67170985a" containerName="dnsmasq-dns"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.041631 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" containerName="neutron-db-sync"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.042262 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.048233 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76d987df64-77wdm"]
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.076397 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.076638 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.076664 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.076703 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.076676 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.076971 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-vkwn7"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.104690 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-combined-ca-bundle\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.104808 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-fernet-keys\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.104873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs6b5\" (UniqueName: \"kubernetes.io/projected/758cf2bf-d514-4a17-88e5-463286f0a3e9-kube-api-access-bs6b5\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.104908 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-scripts\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.104994 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-config-data\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.105033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-public-tls-certs\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.105098 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-credential-keys\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.105118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-internal-tls-certs\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207490 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-fernet-keys\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207568 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs6b5\" (UniqueName: \"kubernetes.io/projected/758cf2bf-d514-4a17-88e5-463286f0a3e9-kube-api-access-bs6b5\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207593 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-scripts\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207644 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-config-data\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-public-tls-certs\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207707 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-credential-keys\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207722 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-internal-tls-certs\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-combined-ca-bundle\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.207986 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-6xjvx"]
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.209801 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.214679 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-fernet-keys\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.219563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-credential-keys\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.239928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-public-tls-certs\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.240416 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-config-data\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.241218 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-combined-ca-bundle\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.241306 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-internal-tls-certs\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.241643 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-6xjvx"]
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.243301 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/758cf2bf-d514-4a17-88e5-463286f0a3e9-scripts\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.289559 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs6b5\" (UniqueName: \"kubernetes.io/projected/758cf2bf-d514-4a17-88e5-463286f0a3e9-kube-api-access-bs6b5\") pod \"keystone-76d987df64-77wdm\" (UID: \"758cf2bf-d514-4a17-88e5-463286f0a3e9\") " pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.310442 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.310537 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-config\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.310563 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4v8f\" (UniqueName: \"kubernetes.io/projected/5fff24cc-23b2-48e1-af92-218fefa1ff89-kube-api-access-t4v8f\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.310601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.310696 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.398862 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-76d987df64-77wdm"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.413633 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.413685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.413736 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-config\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.413761 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4v8f\" (UniqueName: \"kubernetes.io/projected/5fff24cc-23b2-48e1-af92-218fefa1ff89-kube-api-access-t4v8f\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.413800 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.414888 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.415497 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.416048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-config\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.420148 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.448468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4v8f\" (UniqueName: \"kubernetes.io/projected/5fff24cc-23b2-48e1-af92-218fefa1ff89-kube-api-access-t4v8f\") pod \"dnsmasq-dns-5f66db59b9-6xjvx\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.505175 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5cc598b456-74bc7"]
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.507312 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc598b456-74bc7"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.510067 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.510400 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.510465 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.512079 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-f778b"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.515559 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8gwl\" (UniqueName: \"kubernetes.io/projected/57e802a1-56bd-42e5-b02b-15877d9a33e3-kube-api-access-p8gwl\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.515614 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-ovndb-tls-certs\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7"
Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.515642 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-httpd-config\") pod 
\"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.515662 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-config\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.515764 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-combined-ca-bundle\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.562856 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.603682 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cc598b456-74bc7"] Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.617744 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8gwl\" (UniqueName: \"kubernetes.io/projected/57e802a1-56bd-42e5-b02b-15877d9a33e3-kube-api-access-p8gwl\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.617825 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-ovndb-tls-certs\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.617849 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-httpd-config\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.617870 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-config\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.617931 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-combined-ca-bundle\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.626901 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-httpd-config\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.634387 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-combined-ca-bundle\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.644267 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-config\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.651111 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8gwl\" (UniqueName: \"kubernetes.io/projected/57e802a1-56bd-42e5-b02b-15877d9a33e3-kube-api-access-p8gwl\") pod \"neutron-5cc598b456-74bc7\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.659125 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-ovndb-tls-certs\") pod \"neutron-5cc598b456-74bc7\" (UID: 
\"57e802a1-56bd-42e5-b02b-15877d9a33e3\") " pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.850850 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.961557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx574" event={"ID":"3ae97ed0-0d88-4581-ab58-b4a97f8947ad","Type":"ContainerStarted","Data":"f03f17b4a7717d511696f689b0841e0f3a144d1f42906d3ec3d9f58c22e254ef"} Jan 26 15:07:10 crc kubenswrapper[4823]: I0126 15:07:10.984707 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-qx574" podStartSLOduration=3.623551709 podStartE2EDuration="48.984685555s" podCreationTimestamp="2026-01-26 15:06:22 +0000 UTC" firstStartedPulling="2026-01-26 15:06:23.966190638 +0000 UTC m=+1180.651653743" lastFinishedPulling="2026-01-26 15:07:09.327324484 +0000 UTC m=+1226.012787589" observedRunningTime="2026-01-26 15:07:10.979703048 +0000 UTC m=+1227.665166153" watchObservedRunningTime="2026-01-26 15:07:10.984685555 +0000 UTC m=+1227.670148650" Jan 26 15:07:11 crc kubenswrapper[4823]: I0126 15:07:11.219192 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-76d987df64-77wdm"] Jan 26 15:07:11 crc kubenswrapper[4823]: I0126 15:07:11.435759 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-6xjvx"] Jan 26 15:07:11 crc kubenswrapper[4823]: I0126 15:07:11.853908 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 26 15:07:11 crc kubenswrapper[4823]: I0126 15:07:11.980292 4823 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/keystone-76d987df64-77wdm" event={"ID":"758cf2bf-d514-4a17-88e5-463286f0a3e9","Type":"ContainerStarted","Data":"7a64251ee02651cd5c63f74fea552bf3e49af27dd06519f1ee14444fb945c670"} Jan 26 15:07:11 crc kubenswrapper[4823]: I0126 15:07:11.997278 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" event={"ID":"5fff24cc-23b2-48e1-af92-218fefa1ff89","Type":"ContainerStarted","Data":"d2154a68e2f45e463b7a77493e5ea4bd22c8ac1a459f9ec1a922e7461822610e"} Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.044654 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6c6cbf99d4-vbwh8" podUID="4c60001f-e43a-4559-ba67-134f88a3f2a6" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.866002 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-854b94d7cf-txq64"] Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.868540 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.875336 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.875349 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.879749 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-854b94d7cf-txq64"] Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.898752 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-httpd-config\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.898837 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-public-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.898893 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-internal-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.898921 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-config\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.898959 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-ovndb-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.898984 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-combined-ca-bundle\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:12 crc kubenswrapper[4823]: I0126 15:07:12.899249 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7lsn\" (UniqueName: \"kubernetes.io/projected/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-kube-api-access-b7lsn\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.001905 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-internal-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.001998 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-config\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.002059 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-ovndb-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.002087 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-combined-ca-bundle\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.002178 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7lsn\" (UniqueName: \"kubernetes.io/projected/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-kube-api-access-b7lsn\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.002225 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-httpd-config\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.002274 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-public-tls-certs\") pod 
\"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.017182 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-public-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.017215 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-ovndb-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.017392 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-config\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.024226 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-combined-ca-bundle\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.025853 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-httpd-config\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc 
kubenswrapper[4823]: I0126 15:07:13.026606 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-76d987df64-77wdm" event={"ID":"758cf2bf-d514-4a17-88e5-463286f0a3e9","Type":"ContainerStarted","Data":"b0d0ee0ba7b7f241919e61aa65aaf973ffee2527a432bb540e81f6c3baab983b"} Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.026822 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-internal-tls-certs\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.027928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7lsn\" (UniqueName: \"kubernetes.io/projected/c0e3f717-0113-4cb6-be1c-90a19ddf9ee9-kube-api-access-b7lsn\") pod \"neutron-854b94d7cf-txq64\" (UID: \"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9\") " pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.042095 4823 generic.go:334] "Generic (PLEG): container finished" podID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerID="3da3743610842f3fc1856404fe9012924932ea9a39688db0637f689dadc8255d" exitCode=0 Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.042587 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" event={"ID":"5fff24cc-23b2-48e1-af92-218fefa1ff89","Type":"ContainerDied","Data":"3da3743610842f3fc1856404fe9012924932ea9a39688db0637f689dadc8255d"} Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.234174 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:13 crc kubenswrapper[4823]: I0126 15:07:13.453447 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cc598b456-74bc7"] Jan 26 15:07:13 crc kubenswrapper[4823]: W0126 15:07:13.569565 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57e802a1_56bd_42e5_b02b_15877d9a33e3.slice/crio-5f02d9168a0cb48c575ef2a5410624e0ed1b18ad9c084633a57338144ebe2f54 WatchSource:0}: Error finding container 5f02d9168a0cb48c575ef2a5410624e0ed1b18ad9c084633a57338144ebe2f54: Status 404 returned error can't find the container with id 5f02d9168a0cb48c575ef2a5410624e0ed1b18ad9c084633a57338144ebe2f54 Jan 26 15:07:14 crc kubenswrapper[4823]: I0126 15:07:14.076540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc598b456-74bc7" event={"ID":"57e802a1-56bd-42e5-b02b-15877d9a33e3","Type":"ContainerStarted","Data":"5f02d9168a0cb48c575ef2a5410624e0ed1b18ad9c084633a57338144ebe2f54"} Jan 26 15:07:14 crc kubenswrapper[4823]: I0126 15:07:14.077553 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-76d987df64-77wdm" Jan 26 15:07:14 crc kubenswrapper[4823]: I0126 15:07:14.166717 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-76d987df64-77wdm" podStartSLOduration=5.166695312 podStartE2EDuration="5.166695312s" podCreationTimestamp="2026-01-26 15:07:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:14.132486689 +0000 UTC m=+1230.817949794" watchObservedRunningTime="2026-01-26 15:07:14.166695312 +0000 UTC m=+1230.852158417" Jan 26 15:07:14 crc kubenswrapper[4823]: I0126 15:07:14.504126 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-854b94d7cf-txq64"] Jan 26 15:07:15 crc 
kubenswrapper[4823]: I0126 15:07:15.095632 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc598b456-74bc7" event={"ID":"57e802a1-56bd-42e5-b02b-15877d9a33e3","Type":"ContainerStarted","Data":"f0ac807f6237526bbbe0dd9e8b1cac05871d02fce4ad5dc5f9449d2f17d469ad"} Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.096107 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc598b456-74bc7" event={"ID":"57e802a1-56bd-42e5-b02b-15877d9a33e3","Type":"ContainerStarted","Data":"739316c7f2f47efce5699a6a4070b6f0af115c40ada29cddb689c6c24f6b46ee"} Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.096429 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.105784 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854b94d7cf-txq64" event={"ID":"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9","Type":"ContainerStarted","Data":"2bb14c057eacb78b3aa06e81ee67afd7264692755c5671f44a098ffa5f338662"} Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.105854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854b94d7cf-txq64" event={"ID":"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9","Type":"ContainerStarted","Data":"9badb4203b62815afa54ee5e43e7b2eeafaddc467eb3cfca930d14e87d4a275c"} Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.117640 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" event={"ID":"5fff24cc-23b2-48e1-af92-218fefa1ff89","Type":"ContainerStarted","Data":"b238185c5a2a4b6a9ca0e56e9e5a331c3d903fd0d076829bd1be4df28216bfeb"} Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.118083 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.162786 4823 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/neutron-5cc598b456-74bc7" podStartSLOduration=5.162758561 podStartE2EDuration="5.162758561s" podCreationTimestamp="2026-01-26 15:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:15.152595574 +0000 UTC m=+1231.838058679" watchObservedRunningTime="2026-01-26 15:07:15.162758561 +0000 UTC m=+1231.848221666" Jan 26 15:07:15 crc kubenswrapper[4823]: I0126 15:07:15.203741 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" podStartSLOduration=5.203711809 podStartE2EDuration="5.203711809s" podCreationTimestamp="2026-01-26 15:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:15.18689267 +0000 UTC m=+1231.872355775" watchObservedRunningTime="2026-01-26 15:07:15.203711809 +0000 UTC m=+1231.889174904" Jan 26 15:07:16 crc kubenswrapper[4823]: I0126 15:07:16.132693 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-854b94d7cf-txq64" event={"ID":"c0e3f717-0113-4cb6-be1c-90a19ddf9ee9","Type":"ContainerStarted","Data":"c8df495dc1c6197a21760c9699ee1e1f959bc166a6c6b35af8e09edaf99439ae"} Jan 26 15:07:16 crc kubenswrapper[4823]: I0126 15:07:16.170592 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-854b94d7cf-txq64" podStartSLOduration=4.17056378 podStartE2EDuration="4.17056378s" podCreationTimestamp="2026-01-26 15:07:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:16.160992689 +0000 UTC m=+1232.846455804" watchObservedRunningTime="2026-01-26 15:07:16.17056378 +0000 UTC m=+1232.856026885" Jan 26 15:07:17 crc kubenswrapper[4823]: I0126 15:07:17.147657 4823 generic.go:334] "Generic 
(PLEG): container finished" podID="c5c91a8b-7077-4583-aa19-595408fb9003" containerID="02d0134d4ecb0e0d29dd85b9d5c98ec01b3ac4c257702bf1299e26fd6e12286c" exitCode=0 Jan 26 15:07:17 crc kubenswrapper[4823]: I0126 15:07:17.147908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fs2xh" event={"ID":"c5c91a8b-7077-4583-aa19-595408fb9003","Type":"ContainerDied","Data":"02d0134d4ecb0e0d29dd85b9d5c98ec01b3ac4c257702bf1299e26fd6e12286c"} Jan 26 15:07:17 crc kubenswrapper[4823]: I0126 15:07:17.149082 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:20 crc kubenswrapper[4823]: I0126 15:07:20.565640 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" Jan 26 15:07:20 crc kubenswrapper[4823]: I0126 15:07:20.646648 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj"] Jan 26 15:07:20 crc kubenswrapper[4823]: I0126 15:07:20.647037 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="dnsmasq-dns" containerID="cri-o://3b9225fb996b43c0f44eef1ec3f8b759269219172406a6c92d4fb6fd03b0c96b" gracePeriod=10 Jan 26 15:07:21 crc kubenswrapper[4823]: I0126 15:07:21.206933 4823 generic.go:334] "Generic (PLEG): container finished" podID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerID="3b9225fb996b43c0f44eef1ec3f8b759269219172406a6c92d4fb6fd03b0c96b" exitCode=0 Jan 26 15:07:21 crc kubenswrapper[4823]: I0126 15:07:21.207042 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" event={"ID":"bfb8b15e-b589-4777-ab0d-703cba188a74","Type":"ContainerDied","Data":"3b9225fb996b43c0f44eef1ec3f8b759269219172406a6c92d4fb6fd03b0c96b"} Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.398432 4823 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fs2xh" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.487336 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c91a8b-7077-4583-aa19-595408fb9003-logs\") pod \"c5c91a8b-7077-4583-aa19-595408fb9003\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.488416 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-config-data\") pod \"c5c91a8b-7077-4583-aa19-595408fb9003\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.489517 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-combined-ca-bundle\") pod \"c5c91a8b-7077-4583-aa19-595408fb9003\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.489956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-scripts\") pod \"c5c91a8b-7077-4583-aa19-595408fb9003\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.490113 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbp7g\" (UniqueName: \"kubernetes.io/projected/c5c91a8b-7077-4583-aa19-595408fb9003-kube-api-access-cbp7g\") pod \"c5c91a8b-7077-4583-aa19-595408fb9003\" (UID: \"c5c91a8b-7077-4583-aa19-595408fb9003\") " Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.488060 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c5c91a8b-7077-4583-aa19-595408fb9003-logs" (OuterVolumeSpecName: "logs") pod "c5c91a8b-7077-4583-aa19-595408fb9003" (UID: "c5c91a8b-7077-4583-aa19-595408fb9003"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.492310 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5c91a8b-7077-4583-aa19-595408fb9003-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.497594 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-scripts" (OuterVolumeSpecName: "scripts") pod "c5c91a8b-7077-4583-aa19-595408fb9003" (UID: "c5c91a8b-7077-4583-aa19-595408fb9003"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.498492 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c91a8b-7077-4583-aa19-595408fb9003-kube-api-access-cbp7g" (OuterVolumeSpecName: "kube-api-access-cbp7g") pod "c5c91a8b-7077-4583-aa19-595408fb9003" (UID: "c5c91a8b-7077-4583-aa19-595408fb9003"). InnerVolumeSpecName "kube-api-access-cbp7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.499839 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.518676 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-config-data" (OuterVolumeSpecName: "config-data") pod "c5c91a8b-7077-4583-aa19-595408fb9003" (UID: "c5c91a8b-7077-4583-aa19-595408fb9003"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.528735 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5c91a8b-7077-4583-aa19-595408fb9003" (UID: "c5c91a8b-7077-4583-aa19-595408fb9003"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.594423 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.594551 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.594619 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5c91a8b-7077-4583-aa19-595408fb9003-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:23 crc kubenswrapper[4823]: I0126 15:07:23.594677 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbp7g\" (UniqueName: \"kubernetes.io/projected/c5c91a8b-7077-4583-aa19-595408fb9003-kube-api-access-cbp7g\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.131165 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.161350 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.234959 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-fs2xh" event={"ID":"c5c91a8b-7077-4583-aa19-595408fb9003","Type":"ContainerDied","Data":"d2f22428b889a5a1c2d7d606d82879a851941ee6f994636d1e387bee84066b09"} Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.235020 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2f22428b889a5a1c2d7d606d82879a851941ee6f994636d1e387bee84066b09" 
Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.235110 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-fs2xh" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.561649 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7c575987db-2cpjc"] Jan 26 15:07:24 crc kubenswrapper[4823]: E0126 15:07:24.562449 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c91a8b-7077-4583-aa19-595408fb9003" containerName="placement-db-sync" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.562467 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c91a8b-7077-4583-aa19-595408fb9003" containerName="placement-db-sync" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.562651 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c91a8b-7077-4583-aa19-595408fb9003" containerName="placement-db-sync" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.563537 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.566330 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.566712 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.566882 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.567095 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k7zsm" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.568076 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.581868 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c575987db-2cpjc"] Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719166 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-internal-tls-certs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719278 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9bt7\" (UniqueName: \"kubernetes.io/projected/97f649fc-fc2f-4b59-8ed6-0f7c31426519-kube-api-access-k9bt7\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719307 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-scripts\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719386 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-config-data\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719461 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-public-tls-certs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f649fc-fc2f-4b59-8ed6-0f7c31426519-logs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.719515 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-combined-ca-bundle\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.820958 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9bt7\" (UniqueName: \"kubernetes.io/projected/97f649fc-fc2f-4b59-8ed6-0f7c31426519-kube-api-access-k9bt7\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.821013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-scripts\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.821051 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-config-data\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.821091 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-public-tls-certs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.821123 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f649fc-fc2f-4b59-8ed6-0f7c31426519-logs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.821141 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-combined-ca-bundle\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.821170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-internal-tls-certs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.823085 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97f649fc-fc2f-4b59-8ed6-0f7c31426519-logs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.826444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-internal-tls-certs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.826932 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-public-tls-certs\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:24 crc kubenswrapper[4823]: I0126 15:07:24.829650 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-config-data\") pod 
\"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:25 crc kubenswrapper[4823]: I0126 15:07:25.214475 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-scripts\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:25 crc kubenswrapper[4823]: I0126 15:07:25.214854 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97f649fc-fc2f-4b59-8ed6-0f7c31426519-combined-ca-bundle\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:25 crc kubenswrapper[4823]: I0126 15:07:25.217968 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9bt7\" (UniqueName: \"kubernetes.io/projected/97f649fc-fc2f-4b59-8ed6-0f7c31426519-kube-api-access-k9bt7\") pod \"placement-7c575987db-2cpjc\" (UID: \"97f649fc-fc2f-4b59-8ed6-0f7c31426519\") " pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:25 crc kubenswrapper[4823]: I0126 15:07:25.484935 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:25 crc kubenswrapper[4823]: I0126 15:07:25.939073 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-6c6cbf99d4-vbwh8" Jan 26 15:07:26 crc kubenswrapper[4823]: I0126 15:07:26.010408 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75dbc957cb-ckfwc"] Jan 26 15:07:26 crc kubenswrapper[4823]: I0126 15:07:26.010713 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon-log" containerID="cri-o://d81b813c0f64e4fbf4e118c4fde902c88138d105ee75288b1a27977e3f090c94" gracePeriod=30 Jan 26 15:07:26 crc kubenswrapper[4823]: I0126 15:07:26.011443 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" containerID="cri-o://ae177c04f8f1dcb05bd9666d753f2e2bd9fda6779e26a6637805474c893cb5fe" gracePeriod=30 Jan 26 15:07:26 crc kubenswrapper[4823]: I0126 15:07:26.024803 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Jan 26 15:07:28 crc kubenswrapper[4823]: I0126 15:07:28.273143 4823 generic.go:334] "Generic (PLEG): container finished" podID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerID="085b93c655b76f2d33410a85aa5aa5f9d70a889a0c0a1851f70c687913c24d64" exitCode=137 Jan 26 15:07:28 crc kubenswrapper[4823]: I0126 15:07:28.274000 4823 generic.go:334] "Generic (PLEG): container finished" podID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerID="b287f0bdeb2297f752e9a2d6164fd625a912e48bceae83a2756d0cfabcfb8af0" exitCode=137 Jan 26 15:07:28 crc 
kubenswrapper[4823]: I0126 15:07:28.273574 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56c59d768f-mmj9s" event={"ID":"7826fb1c-12c8-42a5-9431-7d22aa0ea308","Type":"ContainerDied","Data":"085b93c655b76f2d33410a85aa5aa5f9d70a889a0c0a1851f70c687913c24d64"} Jan 26 15:07:28 crc kubenswrapper[4823]: I0126 15:07:28.274082 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56c59d768f-mmj9s" event={"ID":"7826fb1c-12c8-42a5-9431-7d22aa0ea308","Type":"ContainerDied","Data":"b287f0bdeb2297f752e9a2d6164fd625a912e48bceae83a2756d0cfabcfb8af0"} Jan 26 15:07:29 crc kubenswrapper[4823]: I0126 15:07:29.171659 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:55118->10.217.0.142:8443: read: connection reset by peer" Jan 26 15:07:31 crc kubenswrapper[4823]: I0126 15:07:31.318457 4823 generic.go:334] "Generic (PLEG): container finished" podID="4f681696-41f2-470d-805c-5b70ea803542" containerID="ae177c04f8f1dcb05bd9666d753f2e2bd9fda6779e26a6637805474c893cb5fe" exitCode=0 Jan 26 15:07:31 crc kubenswrapper[4823]: I0126 15:07:31.318580 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dbc957cb-ckfwc" event={"ID":"4f681696-41f2-470d-805c-5b70ea803542","Type":"ContainerDied","Data":"ae177c04f8f1dcb05bd9666d753f2e2bd9fda6779e26a6637805474c893cb5fe"} Jan 26 15:07:31 crc kubenswrapper[4823]: I0126 15:07:31.792874 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 26 15:07:32 crc 
kubenswrapper[4823]: E0126 15:07:32.782262 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 26 15:07:32 crc kubenswrapper[4823]: E0126 15:07:32.782910 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-694zm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(7565d18e-0ce0-432d-ab8f-10c43561b9f8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 26 15:07:32 crc kubenswrapper[4823]: E0126 15:07:32.784115 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" Jan 
26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.018268 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.104554 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-dns-svc\") pod \"bfb8b15e-b589-4777-ab0d-703cba188a74\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.105040 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-sb\") pod \"bfb8b15e-b589-4777-ab0d-703cba188a74\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.105101 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-config\") pod \"bfb8b15e-b589-4777-ab0d-703cba188a74\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.105259 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgdwq\" (UniqueName: \"kubernetes.io/projected/bfb8b15e-b589-4777-ab0d-703cba188a74-kube-api-access-dgdwq\") pod \"bfb8b15e-b589-4777-ab0d-703cba188a74\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.105318 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-nb\") pod \"bfb8b15e-b589-4777-ab0d-703cba188a74\" (UID: \"bfb8b15e-b589-4777-ab0d-703cba188a74\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 
15:07:33.118942 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfb8b15e-b589-4777-ab0d-703cba188a74-kube-api-access-dgdwq" (OuterVolumeSpecName: "kube-api-access-dgdwq") pod "bfb8b15e-b589-4777-ab0d-703cba188a74" (UID: "bfb8b15e-b589-4777-ab0d-703cba188a74"). InnerVolumeSpecName "kube-api-access-dgdwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.175387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-config" (OuterVolumeSpecName: "config") pod "bfb8b15e-b589-4777-ab0d-703cba188a74" (UID: "bfb8b15e-b589-4777-ab0d-703cba188a74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.184346 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfb8b15e-b589-4777-ab0d-703cba188a74" (UID: "bfb8b15e-b589-4777-ab0d-703cba188a74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.208671 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bfb8b15e-b589-4777-ab0d-703cba188a74" (UID: "bfb8b15e-b589-4777-ab0d-703cba188a74"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.210255 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.210281 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.210293 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgdwq\" (UniqueName: \"kubernetes.io/projected/bfb8b15e-b589-4777-ab0d-703cba188a74-kube-api-access-dgdwq\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.210309 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.246672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bfb8b15e-b589-4777-ab0d-703cba188a74" (UID: "bfb8b15e-b589-4777-ab0d-703cba188a74"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.258531 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.311614 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7826fb1c-12c8-42a5-9431-7d22aa0ea308-logs\") pod \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.311668 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-scripts\") pod \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.311785 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-config-data\") pod \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.311831 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7826fb1c-12c8-42a5-9431-7d22aa0ea308-horizon-secret-key\") pod \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.311968 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm28p\" (UniqueName: \"kubernetes.io/projected/7826fb1c-12c8-42a5-9431-7d22aa0ea308-kube-api-access-bm28p\") pod \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\" (UID: \"7826fb1c-12c8-42a5-9431-7d22aa0ea308\") " Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.312322 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/bfb8b15e-b589-4777-ab0d-703cba188a74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.315995 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7826fb1c-12c8-42a5-9431-7d22aa0ea308-logs" (OuterVolumeSpecName: "logs") pod "7826fb1c-12c8-42a5-9431-7d22aa0ea308" (UID: "7826fb1c-12c8-42a5-9431-7d22aa0ea308"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.317110 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7826fb1c-12c8-42a5-9431-7d22aa0ea308-kube-api-access-bm28p" (OuterVolumeSpecName: "kube-api-access-bm28p") pod "7826fb1c-12c8-42a5-9431-7d22aa0ea308" (UID: "7826fb1c-12c8-42a5-9431-7d22aa0ea308"). InnerVolumeSpecName "kube-api-access-bm28p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.318482 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7826fb1c-12c8-42a5-9431-7d22aa0ea308-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "7826fb1c-12c8-42a5-9431-7d22aa0ea308" (UID: "7826fb1c-12c8-42a5-9431-7d22aa0ea308"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.340469 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-config-data" (OuterVolumeSpecName: "config-data") pod "7826fb1c-12c8-42a5-9431-7d22aa0ea308" (UID: "7826fb1c-12c8-42a5-9431-7d22aa0ea308"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.343047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-scripts" (OuterVolumeSpecName: "scripts") pod "7826fb1c-12c8-42a5-9431-7d22aa0ea308" (UID: "7826fb1c-12c8-42a5-9431-7d22aa0ea308"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.360700 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56c59d768f-mmj9s" event={"ID":"7826fb1c-12c8-42a5-9431-7d22aa0ea308","Type":"ContainerDied","Data":"aee41cb976603807b906947dd09406d73036fe70c70ab0495f1d21bd78e37c29"} Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.360770 4823 scope.go:117] "RemoveContainer" containerID="085b93c655b76f2d33410a85aa5aa5f9d70a889a0c0a1851f70c687913c24d64" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.360927 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-56c59d768f-mmj9s" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.371759 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" event={"ID":"bfb8b15e-b589-4777-ab0d-703cba188a74","Type":"ContainerDied","Data":"4dd334135473b0fd4a7d2a7a9e9f36fe70ae019ed20c24c9cf036158360240fe"} Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.371795 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.371978 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" containerName="ceilometer-notification-agent" containerID="cri-o://a4166120b7b3bc522ddea0b888416fef22a7fa63ef2caf28bfd0dcd259400da4" gracePeriod=30 Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.414262 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm28p\" (UniqueName: \"kubernetes.io/projected/7826fb1c-12c8-42a5-9431-7d22aa0ea308-kube-api-access-bm28p\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.414301 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7826fb1c-12c8-42a5-9431-7d22aa0ea308-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.414314 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.414325 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7826fb1c-12c8-42a5-9431-7d22aa0ea308-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.414338 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/7826fb1c-12c8-42a5-9431-7d22aa0ea308-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.430494 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56c59d768f-mmj9s"] Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.442456 4823 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/horizon-56c59d768f-mmj9s"] Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.454172 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj"] Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.463452 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj"] Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.501215 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b6dbdb6f5-q2tfj" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.529186 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7c575987db-2cpjc"] Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.588279 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" path="/var/lib/kubelet/pods/7826fb1c-12c8-42a5-9431-7d22aa0ea308/volumes" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.588568 4823 scope.go:117] "RemoveContainer" containerID="b287f0bdeb2297f752e9a2d6164fd625a912e48bceae83a2756d0cfabcfb8af0" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.605462 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" path="/var/lib/kubelet/pods/bfb8b15e-b589-4777-ab0d-703cba188a74/volumes" Jan 26 15:07:33 crc kubenswrapper[4823]: W0126 15:07:33.609701 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97f649fc_fc2f_4b59_8ed6_0f7c31426519.slice/crio-aae1b31fe95198125f082f67228083e6a8caf7ece0e052e34e5c8bfcdb26c139 WatchSource:0}: Error finding container aae1b31fe95198125f082f67228083e6a8caf7ece0e052e34e5c8bfcdb26c139: Status 404 returned error 
can't find the container with id aae1b31fe95198125f082f67228083e6a8caf7ece0e052e34e5c8bfcdb26c139 Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.613565 4823 scope.go:117] "RemoveContainer" containerID="3b9225fb996b43c0f44eef1ec3f8b759269219172406a6c92d4fb6fd03b0c96b" Jan 26 15:07:33 crc kubenswrapper[4823]: I0126 15:07:33.633685 4823 scope.go:117] "RemoveContainer" containerID="79293303cbf1cce027eb6bed38d5c7c790214d79ce23e19f0cb4559c08637b8d" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.389109 4823 generic.go:334] "Generic (PLEG): container finished" podID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" containerID="a4166120b7b3bc522ddea0b888416fef22a7fa63ef2caf28bfd0dcd259400da4" exitCode=0 Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.389225 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7565d18e-0ce0-432d-ab8f-10c43561b9f8","Type":"ContainerDied","Data":"a4166120b7b3bc522ddea0b888416fef22a7fa63ef2caf28bfd0dcd259400da4"} Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.398236 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c575987db-2cpjc" event={"ID":"97f649fc-fc2f-4b59-8ed6-0f7c31426519","Type":"ContainerStarted","Data":"ed55a158eeff337fd11a35bcafa08b896b326c4c73874e79aba0e2aa01d6506e"} Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.398294 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c575987db-2cpjc" event={"ID":"97f649fc-fc2f-4b59-8ed6-0f7c31426519","Type":"ContainerStarted","Data":"ed00ff381c1b85dbc6d11ded321b8c410fafd4843f595d49e1e12fbedb991475"} Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.398308 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7c575987db-2cpjc" event={"ID":"97f649fc-fc2f-4b59-8ed6-0f7c31426519","Type":"ContainerStarted","Data":"aae1b31fe95198125f082f67228083e6a8caf7ece0e052e34e5c8bfcdb26c139"} Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 
15:07:34.398439 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.426629 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7c575987db-2cpjc" podStartSLOduration=10.426610599 podStartE2EDuration="10.426610599s" podCreationTimestamp="2026-01-26 15:07:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:34.426437485 +0000 UTC m=+1251.111900590" watchObservedRunningTime="2026-01-26 15:07:34.426610599 +0000 UTC m=+1251.112073704" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.507150 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.552841 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-log-httpd\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.552927 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-694zm\" (UniqueName: \"kubernetes.io/projected/7565d18e-0ce0-432d-ab8f-10c43561b9f8-kube-api-access-694zm\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.552987 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-scripts\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553046 4823 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-config-data\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553110 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-sg-core-conf-yaml\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553139 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-combined-ca-bundle\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553169 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-run-httpd\") pod \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\" (UID: \"7565d18e-0ce0-432d-ab8f-10c43561b9f8\") " Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553297 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553534 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.553807 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.559725 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.559779 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-scripts" (OuterVolumeSpecName: "scripts") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.560503 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7565d18e-0ce0-432d-ab8f-10c43561b9f8-kube-api-access-694zm" (OuterVolumeSpecName: "kube-api-access-694zm") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). 
InnerVolumeSpecName "kube-api-access-694zm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.580910 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.582179 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-config-data" (OuterVolumeSpecName: "config-data") pod "7565d18e-0ce0-432d-ab8f-10c43561b9f8" (UID: "7565d18e-0ce0-432d-ab8f-10c43561b9f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.655317 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-694zm\" (UniqueName: \"kubernetes.io/projected/7565d18e-0ce0-432d-ab8f-10c43561b9f8-kube-api-access-694zm\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.655384 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.655398 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.655413 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.655425 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7565d18e-0ce0-432d-ab8f-10c43561b9f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:34 crc kubenswrapper[4823]: I0126 15:07:34.655439 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7565d18e-0ce0-432d-ab8f-10c43561b9f8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.426514 4823 generic.go:334] "Generic (PLEG): container finished" podID="2dcb08f2-c175-4602-9a45-dad635436a22" containerID="31a9626d076460a91c0c8ab4199e737a293e7dd11fdfb153b6b506e88f02e14d" exitCode=0 Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.426642 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nn9br" event={"ID":"2dcb08f2-c175-4602-9a45-dad635436a22","Type":"ContainerDied","Data":"31a9626d076460a91c0c8ab4199e737a293e7dd11fdfb153b6b506e88f02e14d"} Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.433811 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.433788 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7565d18e-0ce0-432d-ab8f-10c43561b9f8","Type":"ContainerDied","Data":"c252aa106c5a1e25a6c8df640a52b48eb37ea72b1cc21b61d0af29d5ea112fde"} Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.433915 4823 scope.go:117] "RemoveContainer" containerID="a4166120b7b3bc522ddea0b888416fef22a7fa63ef2caf28bfd0dcd259400da4" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.433933 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.505610 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.514787 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.541322 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:07:35 crc kubenswrapper[4823]: E0126 15:07:35.541884 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" containerName="ceilometer-notification-agent" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.541913 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" containerName="ceilometer-notification-agent" Jan 26 15:07:35 crc kubenswrapper[4823]: E0126 15:07:35.541929 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="dnsmasq-dns" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.541937 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="dnsmasq-dns" Jan 26 15:07:35 crc kubenswrapper[4823]: 
E0126 15:07:35.541955 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.541963 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon" Jan 26 15:07:35 crc kubenswrapper[4823]: E0126 15:07:35.541982 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon-log" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.541988 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon-log" Jan 26 15:07:35 crc kubenswrapper[4823]: E0126 15:07:35.542007 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="init" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.542012 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="init" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.542243 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" containerName="ceilometer-notification-agent" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.542267 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon-log" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.542280 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfb8b15e-b589-4777-ab0d-703cba188a74" containerName="dnsmasq-dns" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.542292 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7826fb1c-12c8-42a5-9431-7d22aa0ea308" containerName="horizon" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.544159 4823 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.549854 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.550035 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.556808 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.578293 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7565d18e-0ce0-432d-ab8f-10c43561b9f8" path="/var/lib/kubelet/pods/7565d18e-0ce0-432d-ab8f-10c43561b9f8/volumes" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.678856 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-scripts\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.679075 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-log-httpd\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.679239 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.679372 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-run-httpd\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.679500 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz447\" (UniqueName: \"kubernetes.io/projected/2fc957d6-b6e5-4fad-91cb-e78f450611c9-kube-api-access-fz447\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.679807 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.679873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-config-data\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781039 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-log-httpd\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781136 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781190 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-run-httpd\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781270 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz447\" (UniqueName: \"kubernetes.io/projected/2fc957d6-b6e5-4fad-91cb-e78f450611c9-kube-api-access-fz447\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781355 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781410 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-config-data\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781435 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-scripts\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 
15:07:35.781729 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-log-httpd\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.781877 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-run-httpd\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.787764 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-config-data\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.788586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-scripts\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.789151 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.789604 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " 
pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.813571 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz447\" (UniqueName: \"kubernetes.io/projected/2fc957d6-b6e5-4fad-91cb-e78f450611c9-kube-api-access-fz447\") pod \"ceilometer-0\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " pod="openstack/ceilometer-0" Jan 26 15:07:35 crc kubenswrapper[4823]: I0126 15:07:35.871766 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.356853 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:07:36 crc kubenswrapper[4823]: W0126 15:07:36.365655 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2fc957d6_b6e5_4fad_91cb_e78f450611c9.slice/crio-927d31f868c9e6f5dfb1aa82516a9eb78e0882da0cb447d8d64c6e5d7137c74e WatchSource:0}: Error finding container 927d31f868c9e6f5dfb1aa82516a9eb78e0882da0cb447d8d64c6e5d7137c74e: Status 404 returned error can't find the container with id 927d31f868c9e6f5dfb1aa82516a9eb78e0882da0cb447d8d64c6e5d7137c74e Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.448024 4823 generic.go:334] "Generic (PLEG): container finished" podID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" containerID="f03f17b4a7717d511696f689b0841e0f3a144d1f42906d3ec3d9f58c22e254ef" exitCode=0 Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.449413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx574" event={"ID":"3ae97ed0-0d88-4581-ab58-b4a97f8947ad","Type":"ContainerDied","Data":"f03f17b4a7717d511696f689b0841e0f3a144d1f42906d3ec3d9f58c22e254ef"} Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.458393 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerStarted","Data":"927d31f868c9e6f5dfb1aa82516a9eb78e0882da0cb447d8d64c6e5d7137c74e"} Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.749974 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nn9br" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.812353 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-db-sync-config-data\") pod \"2dcb08f2-c175-4602-9a45-dad635436a22\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.812474 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-combined-ca-bundle\") pod \"2dcb08f2-c175-4602-9a45-dad635436a22\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.812506 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc6bv\" (UniqueName: \"kubernetes.io/projected/2dcb08f2-c175-4602-9a45-dad635436a22-kube-api-access-lc6bv\") pod \"2dcb08f2-c175-4602-9a45-dad635436a22\" (UID: \"2dcb08f2-c175-4602-9a45-dad635436a22\") " Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.825161 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2dcb08f2-c175-4602-9a45-dad635436a22" (UID: "2dcb08f2-c175-4602-9a45-dad635436a22"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.825172 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dcb08f2-c175-4602-9a45-dad635436a22-kube-api-access-lc6bv" (OuterVolumeSpecName: "kube-api-access-lc6bv") pod "2dcb08f2-c175-4602-9a45-dad635436a22" (UID: "2dcb08f2-c175-4602-9a45-dad635436a22"). InnerVolumeSpecName "kube-api-access-lc6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.861573 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2dcb08f2-c175-4602-9a45-dad635436a22" (UID: "2dcb08f2-c175-4602-9a45-dad635436a22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.917906 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.917948 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2dcb08f2-c175-4602-9a45-dad635436a22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:36 crc kubenswrapper[4823]: I0126 15:07:36.917959 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc6bv\" (UniqueName: \"kubernetes.io/projected/2dcb08f2-c175-4602-9a45-dad635436a22-kube-api-access-lc6bv\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.485557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-nn9br" 
event={"ID":"2dcb08f2-c175-4602-9a45-dad635436a22","Type":"ContainerDied","Data":"6a2ae3427f3e1ae7afae0a1ae327646813b949f0d0e3ee59cca17ac543f51924"} Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.486041 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2ae3427f3e1ae7afae0a1ae327646813b949f0d0e3ee59cca17ac543f51924" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.486127 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-nn9br" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.510447 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerStarted","Data":"80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f"} Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.896748 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-75d68448b6-k48rf"] Jan 26 15:07:37 crc kubenswrapper[4823]: E0126 15:07:37.913863 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dcb08f2-c175-4602-9a45-dad635436a22" containerName="barbican-db-sync" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.913912 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dcb08f2-c175-4602-9a45-dad635436a22" containerName="barbican-db-sync" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.914219 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dcb08f2-c175-4602-9a45-dad635436a22" containerName="barbican-db-sync" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.915460 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.919808 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.927930 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.928659 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-786584bf8c-z6fpx"] Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.928720 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-26l4g" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.937335 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75d68448b6-k48rf"] Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.937505 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.939216 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 26 15:07:37 crc kubenswrapper[4823]: I0126 15:07:37.952482 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-786584bf8c-z6fpx"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f23e9d9-0eae-4911-af41-a71424a974f7-logs\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040307 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-config-data\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040352 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k74v9\" (UniqueName: \"kubernetes.io/projected/1f23e9d9-0eae-4911-af41-a71424a974f7-kube-api-access-k74v9\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040395 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636bf50c-43c9-4d39-af26-187a531e84ad-logs\") pod 
\"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-config-data-custom\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040448 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-config-data-custom\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-combined-ca-bundle\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040928 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-config-data\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.040994 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-8q4px\" (UniqueName: \"kubernetes.io/projected/636bf50c-43c9-4d39-af26-187a531e84ad-kube-api-access-8q4px\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.041079 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-combined-ca-bundle\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.069223 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-z9t5s"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.077963 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.103084 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-qx574" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.105015 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-z9t5s"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.143709 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-dns-svc\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-config\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144087 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144119 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-config-data\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144178 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8q4px\" (UniqueName: 
\"kubernetes.io/projected/636bf50c-43c9-4d39-af26-187a531e84ad-kube-api-access-8q4px\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144224 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-combined-ca-bundle\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f23e9d9-0eae-4911-af41-a71424a974f7-logs\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-config-data\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144423 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-k74v9\" (UniqueName: \"kubernetes.io/projected/1f23e9d9-0eae-4911-af41-a71424a974f7-kube-api-access-k74v9\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144467 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636bf50c-43c9-4d39-af26-187a531e84ad-logs\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144491 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dq4z\" (UniqueName: \"kubernetes.io/projected/c480078e-44d9-46bf-a90b-85e464edbdff-kube-api-access-5dq4z\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144516 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-config-data-custom\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144565 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-config-data-custom\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.144631 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-combined-ca-bundle\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.147025 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f23e9d9-0eae-4911-af41-a71424a974f7-logs\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.148798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/636bf50c-43c9-4d39-af26-187a531e84ad-logs\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.152106 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-combined-ca-bundle\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.156357 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-config-data-custom\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.156465 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-config-data-custom\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.159647 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f23e9d9-0eae-4911-af41-a71424a974f7-config-data\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.162298 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-combined-ca-bundle\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.189929 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/636bf50c-43c9-4d39-af26-187a531e84ad-config-data\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.212881 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k74v9\" (UniqueName: \"kubernetes.io/projected/1f23e9d9-0eae-4911-af41-a71424a974f7-kube-api-access-k74v9\") pod \"barbican-keystone-listener-75d68448b6-k48rf\" (UID: \"1f23e9d9-0eae-4911-af41-a71424a974f7\") " pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 
15:07:38.228137 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8q4px\" (UniqueName: \"kubernetes.io/projected/636bf50c-43c9-4d39-af26-187a531e84ad-kube-api-access-8q4px\") pod \"barbican-worker-786584bf8c-z6fpx\" (UID: \"636bf50c-43c9-4d39-af26-187a531e84ad\") " pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.237704 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-558766bffd-sp28z"] Jan 26 15:07:38 crc kubenswrapper[4823]: E0126 15:07:38.238159 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" containerName="cinder-db-sync" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.238179 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" containerName="cinder-db-sync" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.238398 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" containerName="cinder-db-sync" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.239333 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.243146 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.243598 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-558766bffd-sp28z"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.245561 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-config-data\") pod \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.245647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-db-sync-config-data\") pod \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.245752 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbn8\" (UniqueName: \"kubernetes.io/projected/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-kube-api-access-brbn8\") pod \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.245907 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-scripts\") pod \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.245934 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-combined-ca-bundle\") pod \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.245964 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-etc-machine-id\") pod \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\" (UID: \"3ae97ed0-0d88-4581-ab58-b4a97f8947ad\") " Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246184 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c481587-9ea3-4191-b140-728f6e314195-logs\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dq4z\" (UniqueName: \"kubernetes.io/projected/c480078e-44d9-46bf-a90b-85e464edbdff-kube-api-access-5dq4z\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246248 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data-custom\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246302 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-combined-ca-bundle\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-dns-svc\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246348 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nvgw\" (UniqueName: \"kubernetes.io/projected/9c481587-9ea3-4191-b140-728f6e314195-kube-api-access-2nvgw\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246403 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-config\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246434 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246482 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.246504 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.247548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.248783 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-config\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.248793 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-dns-svc\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.249333 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-z9t5s\" 
(UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.256676 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3ae97ed0-0d88-4581-ab58-b4a97f8947ad" (UID: "3ae97ed0-0d88-4581-ab58-b4a97f8947ad"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.272581 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-scripts" (OuterVolumeSpecName: "scripts") pod "3ae97ed0-0d88-4581-ab58-b4a97f8947ad" (UID: "3ae97ed0-0d88-4581-ab58-b4a97f8947ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.280186 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-kube-api-access-brbn8" (OuterVolumeSpecName: "kube-api-access-brbn8") pod "3ae97ed0-0d88-4581-ab58-b4a97f8947ad" (UID: "3ae97ed0-0d88-4581-ab58-b4a97f8947ad"). InnerVolumeSpecName "kube-api-access-brbn8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.282939 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3ae97ed0-0d88-4581-ab58-b4a97f8947ad" (UID: "3ae97ed0-0d88-4581-ab58-b4a97f8947ad"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.284775 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dq4z\" (UniqueName: \"kubernetes.io/projected/c480078e-44d9-46bf-a90b-85e464edbdff-kube-api-access-5dq4z\") pod \"dnsmasq-dns-869f779d85-z9t5s\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.313594 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ae97ed0-0d88-4581-ab58-b4a97f8947ad" (UID: "3ae97ed0-0d88-4581-ab58-b4a97f8947ad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.330677 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-config-data" (OuterVolumeSpecName: "config-data") pod "3ae97ed0-0d88-4581-ab58-b4a97f8947ad" (UID: "3ae97ed0-0d88-4581-ab58-b4a97f8947ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.347851 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.347931 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c481587-9ea3-4191-b140-728f6e314195-logs\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.347973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data-custom\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348021 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-combined-ca-bundle\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348050 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nvgw\" (UniqueName: \"kubernetes.io/projected/9c481587-9ea3-4191-b140-728f6e314195-kube-api-access-2nvgw\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " 
pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348107 4823 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348119 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbn8\" (UniqueName: \"kubernetes.io/projected/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-kube-api-access-brbn8\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348131 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348139 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348149 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348159 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ae97ed0-0d88-4581-ab58-b4a97f8947ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.348971 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c481587-9ea3-4191-b140-728f6e314195-logs\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " 
pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.352235 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-combined-ca-bundle\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.353908 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.354448 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.356231 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data-custom\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.367658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nvgw\" (UniqueName: \"kubernetes.io/projected/9c481587-9ea3-4191-b140-728f6e314195-kube-api-access-2nvgw\") pod \"barbican-api-558766bffd-sp28z\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.385927 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-786584bf8c-z6fpx" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.434786 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.528685 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-qx574" event={"ID":"3ae97ed0-0d88-4581-ab58-b4a97f8947ad","Type":"ContainerDied","Data":"c1f7846ddce442f12937c91c7c2e48c6a8b142b5c513d5eba71b3644dabc6ddc"} Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.528737 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1f7846ddce442f12937c91c7c2e48c6a8b142b5c513d5eba71b3644dabc6ddc" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.528811 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-qx574" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.584200 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.847727 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.866268 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.878778 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-pq89g" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.879862 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.880295 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.881179 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.901889 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.953531 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-z9t5s"] Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.971524 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.971767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.971979 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4963c458-b673-469c-83e8-96f38561d47c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.972022 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-scripts\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.972052 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq58c\" (UniqueName: \"kubernetes.io/projected/4963c458-b673-469c-83e8-96f38561d47c-kube-api-access-hq58c\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:38 crc kubenswrapper[4823]: I0126 15:07:38.972141 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.001635 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-75d68448b6-k48rf"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.017560 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-x2bgk"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.020308 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.028306 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-x2bgk"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.052337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-786584bf8c-z6fpx"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.074044 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.074140 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.074217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4963c458-b673-469c-83e8-96f38561d47c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.074239 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-scripts\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.074279 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hq58c\" (UniqueName: \"kubernetes.io/projected/4963c458-b673-469c-83e8-96f38561d47c-kube-api-access-hq58c\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.074449 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.075377 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4963c458-b673-469c-83e8-96f38561d47c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.087434 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.090427 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.092007 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data\") pod \"cinder-scheduler-0\" (UID: 
\"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.095370 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-scripts\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.106099 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq58c\" (UniqueName: \"kubernetes.io/projected/4963c458-b673-469c-83e8-96f38561d47c-kube-api-access-hq58c\") pod \"cinder-scheduler-0\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.162514 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.164067 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.166619 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.173996 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.177035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-dns-svc\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.177101 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.177180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pth7\" (UniqueName: \"kubernetes.io/projected/726afeb0-ed38-4d62-ad73-c0379a57f547-kube-api-access-7pth7\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.177197 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " 
pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.177469 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-config\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279449 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe45863-c887-4c4b-a280-3b5411753cdf-logs\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-dns-svc\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279677 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279701 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279767 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pth7\" (UniqueName: \"kubernetes.io/projected/726afeb0-ed38-4d62-ad73-c0379a57f547-kube-api-access-7pth7\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279815 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.279931 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fe45863-c887-4c4b-a280-3b5411753cdf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.280148 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-scripts\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.280279 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-config\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.280388 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data-custom\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.280441 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2hb\" (UniqueName: \"kubernetes.io/projected/0fe45863-c887-4c4b-a280-3b5411753cdf-kube-api-access-4n2hb\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.281330 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-dns-svc\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.281489 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.283750 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-z9t5s"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.283796 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: W0126 15:07:39.286861 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc480078e_44d9_46bf_a90b_85e464edbdff.slice/crio-8fe3e68b8c56c33a441dda5c3ffa291ac815b75860bca4951e45a7496e2bd70a WatchSource:0}: Error finding container 8fe3e68b8c56c33a441dda5c3ffa291ac815b75860bca4951e45a7496e2bd70a: Status 404 returned error can't find the container with id 8fe3e68b8c56c33a441dda5c3ffa291ac815b75860bca4951e45a7496e2bd70a Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.294889 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-config\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.301842 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pth7\" (UniqueName: \"kubernetes.io/projected/726afeb0-ed38-4d62-ad73-c0379a57f547-kube-api-access-7pth7\") pod \"dnsmasq-dns-58db5546cc-x2bgk\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.306448 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.381967 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fe45863-c887-4c4b-a280-3b5411753cdf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-scripts\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382607 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data-custom\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fe45863-c887-4c4b-a280-3b5411753cdf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382634 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2hb\" (UniqueName: \"kubernetes.io/projected/0fe45863-c887-4c4b-a280-3b5411753cdf-kube-api-access-4n2hb\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe45863-c887-4c4b-a280-3b5411753cdf-logs\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382754 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.382773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.385069 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe45863-c887-4c4b-a280-3b5411753cdf-logs\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.388362 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.399240 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-scripts\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.399482 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data-custom\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.401487 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.403986 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2hb\" (UniqueName: \"kubernetes.io/projected/0fe45863-c887-4c4b-a280-3b5411753cdf-kube-api-access-4n2hb\") pod \"cinder-api-0\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") " pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.500241 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.608917 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" event={"ID":"c480078e-44d9-46bf-a90b-85e464edbdff","Type":"ContainerStarted","Data":"8fe3e68b8c56c33a441dda5c3ffa291ac815b75860bca4951e45a7496e2bd70a"} Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.609448 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-558766bffd-sp28z"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.609472 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" event={"ID":"1f23e9d9-0eae-4911-af41-a71424a974f7","Type":"ContainerStarted","Data":"dfd02f4c1fb36b8eaf278a74bf2898127b3cf39b2c3a13bd0e5a3313ee366a94"} Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.609488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786584bf8c-z6fpx" event={"ID":"636bf50c-43c9-4d39-af26-187a531e84ad","Type":"ContainerStarted","Data":"eacde235e0418b607f38c0385a703d61430b89ad145791dd2864fcc5df550913"} Jan 26 15:07:39 crc kubenswrapper[4823]: W0126 15:07:39.611392 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9c481587_9ea3_4191_b140_728f6e314195.slice/crio-66976e5b725d647c09e277a0e5452e489abe24c758d0f441db52bc3958407849 WatchSource:0}: Error finding container 66976e5b725d647c09e277a0e5452e489abe24c758d0f441db52bc3958407849: Status 404 returned error can't find the container with id 66976e5b725d647c09e277a0e5452e489abe24c758d0f441db52bc3958407849 Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.613944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerStarted","Data":"73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297"} Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.866652 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:39 crc kubenswrapper[4823]: I0126 15:07:39.969933 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-x2bgk"] Jan 26 15:07:39 crc kubenswrapper[4823]: W0126 15:07:39.980509 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod726afeb0_ed38_4d62_ad73_c0379a57f547.slice/crio-855595e2ecb63009fca3cff844640f87d928c4d5e4e4f6b300deb1b9b0b78b55 WatchSource:0}: Error finding container 855595e2ecb63009fca3cff844640f87d928c4d5e4e4f6b300deb1b9b0b78b55: Status 404 returned error can't find the container with id 855595e2ecb63009fca3cff844640f87d928c4d5e4e4f6b300deb1b9b0b78b55 Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.120217 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:07:40 crc kubenswrapper[4823]: W0126 15:07:40.126381 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fe45863_c887_4c4b_a280_3b5411753cdf.slice/crio-2a9efa8e5c0767240a9b8462d16cd60c3764051ae60e2e93d2ce9e4167c28972 WatchSource:0}: Error finding container 2a9efa8e5c0767240a9b8462d16cd60c3764051ae60e2e93d2ce9e4167c28972: Status 404 returned error can't find the container with id 2a9efa8e5c0767240a9b8462d16cd60c3764051ae60e2e93d2ce9e4167c28972 Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.628738 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558766bffd-sp28z" 
event={"ID":"9c481587-9ea3-4191-b140-728f6e314195","Type":"ContainerStarted","Data":"df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.629069 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558766bffd-sp28z" event={"ID":"9c481587-9ea3-4191-b140-728f6e314195","Type":"ContainerStarted","Data":"66976e5b725d647c09e277a0e5452e489abe24c758d0f441db52bc3958407849"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.631618 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0fe45863-c887-4c4b-a280-3b5411753cdf","Type":"ContainerStarted","Data":"2a9efa8e5c0767240a9b8462d16cd60c3764051ae60e2e93d2ce9e4167c28972"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.635760 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerStarted","Data":"59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.638223 4823 generic.go:334] "Generic (PLEG): container finished" podID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerID="4fd2489bb2f81324a275b2ac6f3d7d00c7a3a379fe4efbacd52e958072146a40" exitCode=0 Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.638276 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" event={"ID":"726afeb0-ed38-4d62-ad73-c0379a57f547","Type":"ContainerDied","Data":"4fd2489bb2f81324a275b2ac6f3d7d00c7a3a379fe4efbacd52e958072146a40"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.638296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" event={"ID":"726afeb0-ed38-4d62-ad73-c0379a57f547","Type":"ContainerStarted","Data":"855595e2ecb63009fca3cff844640f87d928c4d5e4e4f6b300deb1b9b0b78b55"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 
15:07:40.642560 4823 generic.go:334] "Generic (PLEG): container finished" podID="c480078e-44d9-46bf-a90b-85e464edbdff" containerID="13a6624dfc0a77ed1182af07347c099c79115b7f617f54873c1160d450f81513" exitCode=0 Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.642623 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" event={"ID":"c480078e-44d9-46bf-a90b-85e464edbdff","Type":"ContainerDied","Data":"13a6624dfc0a77ed1182af07347c099c79115b7f617f54873c1160d450f81513"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.652297 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4963c458-b673-469c-83e8-96f38561d47c","Type":"ContainerStarted","Data":"3f3f563eb393880e3db4181cbc13ffad849f38517574d83f1823cd13d118979e"} Jan 26 15:07:40 crc kubenswrapper[4823]: I0126 15:07:40.865175 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cc598b456-74bc7" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.336712 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.435182 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-sb\") pod \"c480078e-44d9-46bf-a90b-85e464edbdff\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.435626 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-nb\") pod \"c480078e-44d9-46bf-a90b-85e464edbdff\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.435849 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-dns-svc\") pod \"c480078e-44d9-46bf-a90b-85e464edbdff\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.435957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dq4z\" (UniqueName: \"kubernetes.io/projected/c480078e-44d9-46bf-a90b-85e464edbdff-kube-api-access-5dq4z\") pod \"c480078e-44d9-46bf-a90b-85e464edbdff\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.436038 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-config\") pod \"c480078e-44d9-46bf-a90b-85e464edbdff\" (UID: \"c480078e-44d9-46bf-a90b-85e464edbdff\") " Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.442793 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/c480078e-44d9-46bf-a90b-85e464edbdff-kube-api-access-5dq4z" (OuterVolumeSpecName: "kube-api-access-5dq4z") pod "c480078e-44d9-46bf-a90b-85e464edbdff" (UID: "c480078e-44d9-46bf-a90b-85e464edbdff"). InnerVolumeSpecName "kube-api-access-5dq4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.471922 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-config" (OuterVolumeSpecName: "config") pod "c480078e-44d9-46bf-a90b-85e464edbdff" (UID: "c480078e-44d9-46bf-a90b-85e464edbdff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.479112 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c480078e-44d9-46bf-a90b-85e464edbdff" (UID: "c480078e-44d9-46bf-a90b-85e464edbdff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.479476 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c480078e-44d9-46bf-a90b-85e464edbdff" (UID: "c480078e-44d9-46bf-a90b-85e464edbdff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.501103 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c480078e-44d9-46bf-a90b-85e464edbdff" (UID: "c480078e-44d9-46bf-a90b-85e464edbdff"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.538563 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.538626 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dq4z\" (UniqueName: \"kubernetes.io/projected/c480078e-44d9-46bf-a90b-85e464edbdff-kube-api-access-5dq4z\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.538642 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.538652 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.538661 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c480078e-44d9-46bf-a90b-85e464edbdff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.661968 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" event={"ID":"c480078e-44d9-46bf-a90b-85e464edbdff","Type":"ContainerDied","Data":"8fe3e68b8c56c33a441dda5c3ffa291ac815b75860bca4951e45a7496e2bd70a"} Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.662029 4823 scope.go:117] "RemoveContainer" containerID="13a6624dfc0a77ed1182af07347c099c79115b7f617f54873c1160d450f81513" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.662141 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-z9t5s" Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.712312 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-z9t5s"] Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.722873 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-z9t5s"] Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.797264 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:07:41 crc kubenswrapper[4823]: I0126 15:07:41.801677 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.334392 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-76d987df64-77wdm" Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.681929 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558766bffd-sp28z" event={"ID":"9c481587-9ea3-4191-b140-728f6e314195","Type":"ContainerStarted","Data":"348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252"} Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.683530 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.683669 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.693763 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"0fe45863-c887-4c4b-a280-3b5411753cdf","Type":"ContainerStarted","Data":"0891116c8c4a490a2c171bbccd8f3698c96f38f3c7c080ac7077e0595e4a2140"} Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.698339 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" event={"ID":"726afeb0-ed38-4d62-ad73-c0379a57f547","Type":"ContainerStarted","Data":"26f087c98a6eb40df28a85d4ef864ea5191bae88c6133cb412138d310c620805"} Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.699172 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.722004 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-558766bffd-sp28z" podStartSLOduration=4.721976844 podStartE2EDuration="4.721976844s" podCreationTimestamp="2026-01-26 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:42.714810109 +0000 UTC m=+1259.400273224" watchObservedRunningTime="2026-01-26 15:07:42.721976844 +0000 UTC m=+1259.407439949" Jan 26 15:07:42 crc kubenswrapper[4823]: I0126 15:07:42.762307 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" podStartSLOduration=4.762276916 podStartE2EDuration="4.762276916s" podCreationTimestamp="2026-01-26 15:07:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:42.758965825 +0000 UTC m=+1259.444428930" watchObservedRunningTime="2026-01-26 15:07:42.762276916 +0000 UTC m=+1259.447740021" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.271764 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-854b94d7cf-txq64" Jan 26 15:07:43 crc 
kubenswrapper[4823]: I0126 15:07:43.361869 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cc598b456-74bc7"] Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.362506 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cc598b456-74bc7" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-api" containerID="cri-o://739316c7f2f47efce5699a6a4070b6f0af115c40ada29cddb689c6c24f6b46ee" gracePeriod=30 Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.362663 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5cc598b456-74bc7" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-httpd" containerID="cri-o://f0ac807f6237526bbbe0dd9e8b1cac05871d02fce4ad5dc5f9449d2f17d469ad" gracePeriod=30 Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.592028 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c480078e-44d9-46bf-a90b-85e464edbdff" path="/var/lib/kubelet/pods/c480078e-44d9-46bf-a90b-85e464edbdff/volumes" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.731102 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" event={"ID":"1f23e9d9-0eae-4911-af41-a71424a974f7","Type":"ContainerStarted","Data":"fdcc2d3d4054a55e4a27826df0b3f44d1073d6dcdeab300a6365d178ec6b4628"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.731604 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" event={"ID":"1f23e9d9-0eae-4911-af41-a71424a974f7","Type":"ContainerStarted","Data":"3f1d0d83d82dd4e763359371137825b91546dbe0a5a783fd05cb5d4f3fc0f8e0"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.760234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786584bf8c-z6fpx" 
event={"ID":"636bf50c-43c9-4d39-af26-187a531e84ad","Type":"ContainerStarted","Data":"8b5768db9ae4a7c6da86713c6b8690eae6fa84c561a53c79a0c731fd2325fb75"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.761077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-786584bf8c-z6fpx" event={"ID":"636bf50c-43c9-4d39-af26-187a531e84ad","Type":"ContainerStarted","Data":"4cabc6b23f2a70dfea0fd8f7a51b6637b610640572875f03c04d9ad9b9a2d8d1"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.785141 4823 generic.go:334] "Generic (PLEG): container finished" podID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerID="f0ac807f6237526bbbe0dd9e8b1cac05871d02fce4ad5dc5f9449d2f17d469ad" exitCode=0 Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.785231 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc598b456-74bc7" event={"ID":"57e802a1-56bd-42e5-b02b-15877d9a33e3","Type":"ContainerDied","Data":"f0ac807f6237526bbbe0dd9e8b1cac05871d02fce4ad5dc5f9449d2f17d469ad"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.788589 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-75d68448b6-k48rf" podStartSLOduration=3.38384918 podStartE2EDuration="6.788542007s" podCreationTimestamp="2026-01-26 15:07:37 +0000 UTC" firstStartedPulling="2026-01-26 15:07:38.969126905 +0000 UTC m=+1255.654590010" lastFinishedPulling="2026-01-26 15:07:42.373819742 +0000 UTC m=+1259.059282837" observedRunningTime="2026-01-26 15:07:43.746351344 +0000 UTC m=+1260.431814449" watchObservedRunningTime="2026-01-26 15:07:43.788542007 +0000 UTC m=+1260.474005112" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.798883 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-786584bf8c-z6fpx" podStartSLOduration=3.500655772 podStartE2EDuration="6.798784567s" podCreationTimestamp="2026-01-26 15:07:37 +0000 UTC" 
firstStartedPulling="2026-01-26 15:07:39.077345732 +0000 UTC m=+1255.762808837" lastFinishedPulling="2026-01-26 15:07:42.375474527 +0000 UTC m=+1259.060937632" observedRunningTime="2026-01-26 15:07:43.788903366 +0000 UTC m=+1260.474366491" watchObservedRunningTime="2026-01-26 15:07:43.798784567 +0000 UTC m=+1260.484247672" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.823993 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4963c458-b673-469c-83e8-96f38561d47c","Type":"ContainerStarted","Data":"8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.853166 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api-log" containerID="cri-o://0891116c8c4a490a2c171bbccd8f3698c96f38f3c7c080ac7077e0595e4a2140" gracePeriod=30 Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.853343 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api" containerID="cri-o://fd45110f2146ffb120a08e80e4c25289cc9c0afa6ba428e1d17ad8bc1322ccdf" gracePeriod=30 Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.853569 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0fe45863-c887-4c4b-a280-3b5411753cdf","Type":"ContainerStarted","Data":"fd45110f2146ffb120a08e80e4c25289cc9c0afa6ba428e1d17ad8bc1322ccdf"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.853638 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.894088 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerStarted","Data":"c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697"} Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.894607 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.949781 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.937434545 podStartE2EDuration="8.949759511s" podCreationTimestamp="2026-01-26 15:07:35 +0000 UTC" firstStartedPulling="2026-01-26 15:07:36.369830804 +0000 UTC m=+1253.055293929" lastFinishedPulling="2026-01-26 15:07:42.38215579 +0000 UTC m=+1259.067618895" observedRunningTime="2026-01-26 15:07:43.946523783 +0000 UTC m=+1260.631986898" watchObservedRunningTime="2026-01-26 15:07:43.949759511 +0000 UTC m=+1260.635222616" Jan 26 15:07:43 crc kubenswrapper[4823]: I0126 15:07:43.960314 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.960288879 podStartE2EDuration="4.960288879s" podCreationTimestamp="2026-01-26 15:07:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:43.915741782 +0000 UTC m=+1260.601204887" watchObservedRunningTime="2026-01-26 15:07:43.960288879 +0000 UTC m=+1260.645751984" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.717649 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-67c696b96b-69j89"] Jan 26 15:07:44 crc kubenswrapper[4823]: E0126 15:07:44.727907 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c480078e-44d9-46bf-a90b-85e464edbdff" containerName="init" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.727962 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c480078e-44d9-46bf-a90b-85e464edbdff" 
containerName="init" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.729007 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c480078e-44d9-46bf-a90b-85e464edbdff" containerName="init" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.731073 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.736968 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.737161 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.741775 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67c696b96b-69j89"] Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.884455 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-combined-ca-bundle\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.884898 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-config-data\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.884928 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-public-tls-certs\") pod 
\"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.884945 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nl9m\" (UniqueName: \"kubernetes.io/projected/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-kube-api-access-9nl9m\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.884985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-logs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.885007 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-config-data-custom\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.886902 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-internal-tls-certs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.945936 4823 generic.go:334] "Generic (PLEG): container finished" podID="0fe45863-c887-4c4b-a280-3b5411753cdf" 
containerID="fd45110f2146ffb120a08e80e4c25289cc9c0afa6ba428e1d17ad8bc1322ccdf" exitCode=0 Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.945985 4823 generic.go:334] "Generic (PLEG): container finished" podID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerID="0891116c8c4a490a2c171bbccd8f3698c96f38f3c7c080ac7077e0595e4a2140" exitCode=143 Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.946054 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0fe45863-c887-4c4b-a280-3b5411753cdf","Type":"ContainerDied","Data":"fd45110f2146ffb120a08e80e4c25289cc9c0afa6ba428e1d17ad8bc1322ccdf"} Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.946136 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0fe45863-c887-4c4b-a280-3b5411753cdf","Type":"ContainerDied","Data":"0891116c8c4a490a2c171bbccd8f3698c96f38f3c7c080ac7077e0595e4a2140"} Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.965391 4823 generic.go:334] "Generic (PLEG): container finished" podID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerID="739316c7f2f47efce5699a6a4070b6f0af115c40ada29cddb689c6c24f6b46ee" exitCode=0 Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.965508 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc598b456-74bc7" event={"ID":"57e802a1-56bd-42e5-b02b-15877d9a33e3","Type":"ContainerDied","Data":"739316c7f2f47efce5699a6a4070b6f0af115c40ada29cddb689c6c24f6b46ee"} Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.988852 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-config-data\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.989013 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9nl9m\" (UniqueName: \"kubernetes.io/projected/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-kube-api-access-9nl9m\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.989090 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-public-tls-certs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.989157 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-logs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.989223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-config-data-custom\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.989474 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-internal-tls-certs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.993449 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-combined-ca-bundle\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.993358 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4963c458-b673-469c-83e8-96f38561d47c","Type":"ContainerStarted","Data":"66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999"}
Jan 26 15:07:44 crc kubenswrapper[4823]: I0126 15:07:44.997514 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-logs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.003210 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-config-data\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.013977 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-combined-ca-bundle\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.018731 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-public-tls-certs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.023017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-internal-tls-certs\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.031125 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-config-data-custom\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.037219 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nl9m\" (UniqueName: \"kubernetes.io/projected/1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f-kube-api-access-9nl9m\") pod \"barbican-api-67c696b96b-69j89\" (UID: \"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f\") " pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.073858 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.141759 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc598b456-74bc7"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.162654 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.53888073 podStartE2EDuration="7.16262562s" podCreationTimestamp="2026-01-26 15:07:38 +0000 UTC" firstStartedPulling="2026-01-26 15:07:39.878221714 +0000 UTC m=+1256.563684849" lastFinishedPulling="2026-01-26 15:07:42.501966634 +0000 UTC m=+1259.187429739" observedRunningTime="2026-01-26 15:07:45.015604064 +0000 UTC m=+1261.701067169" watchObservedRunningTime="2026-01-26 15:07:45.16262562 +0000 UTC m=+1261.848088725"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.250926 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 26 15:07:45 crc kubenswrapper[4823]: E0126 15:07:45.251518 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-api"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.251542 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-api"
Jan 26 15:07:45 crc kubenswrapper[4823]: E0126 15:07:45.251558 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-httpd"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.251565 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-httpd"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.251749 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-httpd"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.251769 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" containerName="neutron-api"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.252615 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.257826 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.258069 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.258299 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-jnmrt"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.272585 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.305012 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-ovndb-tls-certs\") pod \"57e802a1-56bd-42e5-b02b-15877d9a33e3\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.305087 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-config\") pod \"57e802a1-56bd-42e5-b02b-15877d9a33e3\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.305213 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8gwl\" (UniqueName: \"kubernetes.io/projected/57e802a1-56bd-42e5-b02b-15877d9a33e3-kube-api-access-p8gwl\") pod \"57e802a1-56bd-42e5-b02b-15877d9a33e3\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.305269 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-combined-ca-bundle\") pod \"57e802a1-56bd-42e5-b02b-15877d9a33e3\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.305316 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-httpd-config\") pod \"57e802a1-56bd-42e5-b02b-15877d9a33e3\" (UID: \"57e802a1-56bd-42e5-b02b-15877d9a33e3\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.315709 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "57e802a1-56bd-42e5-b02b-15877d9a33e3" (UID: "57e802a1-56bd-42e5-b02b-15877d9a33e3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.315780 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.317674 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e802a1-56bd-42e5-b02b-15877d9a33e3-kube-api-access-p8gwl" (OuterVolumeSpecName: "kube-api-access-p8gwl") pod "57e802a1-56bd-42e5-b02b-15877d9a33e3" (UID: "57e802a1-56bd-42e5-b02b-15877d9a33e3"). InnerVolumeSpecName "kube-api-access-p8gwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.390900 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57e802a1-56bd-42e5-b02b-15877d9a33e3" (UID: "57e802a1-56bd-42e5-b02b-15877d9a33e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.408098 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fe45863-c887-4c4b-a280-3b5411753cdf-etc-machine-id\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.408163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe45863-c887-4c4b-a280-3b5411753cdf-logs\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.408264 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.408291 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-combined-ca-bundle\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.408324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-scripts\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.408392 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data-custom\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.409077 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2hb\" (UniqueName: \"kubernetes.io/projected/0fe45863-c887-4c4b-a280-3b5411753cdf-kube-api-access-4n2hb\") pod \"0fe45863-c887-4c4b-a280-3b5411753cdf\" (UID: \"0fe45863-c887-4c4b-a280-3b5411753cdf\") "
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.417790 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gmvv\" (UniqueName: \"kubernetes.io/projected/63a706b5-54fa-4b1a-a755-04a5a7a52973-kube-api-access-8gmvv\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.417850 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a706b5-54fa-4b1a-a755-04a5a7a52973-combined-ca-bundle\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.417887 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63a706b5-54fa-4b1a-a755-04a5a7a52973-openstack-config\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.418155 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63a706b5-54fa-4b1a-a755-04a5a7a52973-openstack-config-secret\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.418316 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8gwl\" (UniqueName: \"kubernetes.io/projected/57e802a1-56bd-42e5-b02b-15877d9a33e3-kube-api-access-p8gwl\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.418335 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.418348 4823 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.418466 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0fe45863-c887-4c4b-a280-3b5411753cdf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.419345 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe45863-c887-4c4b-a280-3b5411753cdf-logs" (OuterVolumeSpecName: "logs") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.423510 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe45863-c887-4c4b-a280-3b5411753cdf-kube-api-access-4n2hb" (OuterVolumeSpecName: "kube-api-access-4n2hb") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "kube-api-access-4n2hb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.423676 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-config" (OuterVolumeSpecName: "config") pod "57e802a1-56bd-42e5-b02b-15877d9a33e3" (UID: "57e802a1-56bd-42e5-b02b-15877d9a33e3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.425415 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-scripts" (OuterVolumeSpecName: "scripts") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.429083 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.463033 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.468871 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "57e802a1-56bd-42e5-b02b-15877d9a33e3" (UID: "57e802a1-56bd-42e5-b02b-15877d9a33e3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.479868 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data" (OuterVolumeSpecName: "config-data") pod "0fe45863-c887-4c4b-a280-3b5411753cdf" (UID: "0fe45863-c887-4c4b-a280-3b5411753cdf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.520425 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gmvv\" (UniqueName: \"kubernetes.io/projected/63a706b5-54fa-4b1a-a755-04a5a7a52973-kube-api-access-8gmvv\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.520539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a706b5-54fa-4b1a-a755-04a5a7a52973-combined-ca-bundle\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.520598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63a706b5-54fa-4b1a-a755-04a5a7a52973-openstack-config\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.528429 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63a706b5-54fa-4b1a-a755-04a5a7a52973-combined-ca-bundle\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535052 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/63a706b5-54fa-4b1a-a755-04a5a7a52973-openstack-config\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535314 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63a706b5-54fa-4b1a-a755-04a5a7a52973-openstack-config-secret\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535870 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0fe45863-c887-4c4b-a280-3b5411753cdf-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535891 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe45863-c887-4c4b-a280-3b5411753cdf-logs\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535901 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535911 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535919 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-scripts\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535928 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0fe45863-c887-4c4b-a280-3b5411753cdf-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535938 4823 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535951 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2hb\" (UniqueName: \"kubernetes.io/projected/0fe45863-c887-4c4b-a280-3b5411753cdf-kube-api-access-4n2hb\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.535963 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/57e802a1-56bd-42e5-b02b-15877d9a33e3-config\") on node \"crc\" DevicePath \"\""
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.538795 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gmvv\" (UniqueName: \"kubernetes.io/projected/63a706b5-54fa-4b1a-a755-04a5a7a52973-kube-api-access-8gmvv\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.540947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/63a706b5-54fa-4b1a-a755-04a5a7a52973-openstack-config-secret\") pod \"openstackclient\" (UID: \"63a706b5-54fa-4b1a-a755-04a5a7a52973\") " pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.605655 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 26 15:07:45 crc kubenswrapper[4823]: I0126 15:07:45.814289 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-67c696b96b-69j89"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.002978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0fe45863-c887-4c4b-a280-3b5411753cdf","Type":"ContainerDied","Data":"2a9efa8e5c0767240a9b8462d16cd60c3764051ae60e2e93d2ce9e4167c28972"}
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.003299 4823 scope.go:117] "RemoveContainer" containerID="fd45110f2146ffb120a08e80e4c25289cc9c0afa6ba428e1d17ad8bc1322ccdf"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.003015 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.010167 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c696b96b-69j89" event={"ID":"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f","Type":"ContainerStarted","Data":"3e7b1861a39d9ee30b8427e2d7ddc5523e4e2f32db4ba45f6e2db2d56f8b947a"}
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.014583 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cc598b456-74bc7"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.014775 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cc598b456-74bc7" event={"ID":"57e802a1-56bd-42e5-b02b-15877d9a33e3","Type":"ContainerDied","Data":"5f02d9168a0cb48c575ef2a5410624e0ed1b18ad9c084633a57338144ebe2f54"}
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.101460 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5cc598b456-74bc7"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.112237 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5cc598b456-74bc7"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.123547 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.136137 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.137320 4823 scope.go:117] "RemoveContainer" containerID="0891116c8c4a490a2c171bbccd8f3698c96f38f3c7c080ac7077e0595e4a2140"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.142646 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.175004 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Jan 26 15:07:46 crc kubenswrapper[4823]: E0126 15:07:46.175557 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api-log"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.175577 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api-log"
Jan 26 15:07:46 crc kubenswrapper[4823]: E0126 15:07:46.175600 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.175607 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.175855 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.175880 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" containerName="cinder-api-log"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.182893 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.187082 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.187440 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.187731 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.242244 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.342611 4823 scope.go:117] "RemoveContainer" containerID="f0ac807f6237526bbbe0dd9e8b1cac05871d02fce4ad5dc5f9449d2f17d469ad"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.353919 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.353998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39eda835-a007-4e12-8a6a-86100eb17105-logs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354070 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-public-tls-certs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-config-data\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354196 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39eda835-a007-4e12-8a6a-86100eb17105-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354275 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-scripts\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354308 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm8c2\" (UniqueName: \"kubernetes.io/projected/39eda835-a007-4e12-8a6a-86100eb17105-kube-api-access-jm8c2\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.354330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-config-data-custom\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.377453 4823 scope.go:117] "RemoveContainer" containerID="739316c7f2f47efce5699a6a4070b6f0af115c40ada29cddb689c6c24f6b46ee"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456225 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456440 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-public-tls-certs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-config-data\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456655 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39eda835-a007-4e12-8a6a-86100eb17105-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456739 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-scripts\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456821 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm8c2\" (UniqueName: \"kubernetes.io/projected/39eda835-a007-4e12-8a6a-86100eb17105-kube-api-access-jm8c2\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-config-data-custom\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456746 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/39eda835-a007-4e12-8a6a-86100eb17105-etc-machine-id\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.456990 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.457265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39eda835-a007-4e12-8a6a-86100eb17105-logs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.458060 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39eda835-a007-4e12-8a6a-86100eb17105-logs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.465348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-config-data-custom\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.465528 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-scripts\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.465798 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-public-tls-certs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.465904 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.472296 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.475167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39eda835-a007-4e12-8a6a-86100eb17105-config-data\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.476937 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm8c2\" (UniqueName: \"kubernetes.io/projected/39eda835-a007-4e12-8a6a-86100eb17105-kube-api-access-jm8c2\") pod \"cinder-api-0\" (UID: \"39eda835-a007-4e12-8a6a-86100eb17105\") " pod="openstack/cinder-api-0"
Jan 26 15:07:46 crc kubenswrapper[4823]: I0126 15:07:46.659662 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.040466 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"63a706b5-54fa-4b1a-a755-04a5a7a52973","Type":"ContainerStarted","Data":"9b5528f96a52f292f3c01b50c661170648bf9a51ac1d8c49e7cb5f36e22d7fb2"}
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.050120 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c696b96b-69j89" event={"ID":"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f","Type":"ContainerStarted","Data":"66a4423d65c78a348e5d8e4de4d929a1989b88ad98788b77f858e236515ea51e"}
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.050179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-67c696b96b-69j89" event={"ID":"1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f","Type":"ContainerStarted","Data":"cee998ec0a0d331dd81ea4e34949fb666ded2e6277aa01628850c7e023ad27aa"}
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.052648 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.052696 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-67c696b96b-69j89"
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.077474 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-67c696b96b-69j89" podStartSLOduration=3.07745115 podStartE2EDuration="3.07745115s" podCreationTimestamp="2026-01-26 15:07:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:47.071670672 +0000 UTC m=+1263.757133777" watchObservedRunningTime="2026-01-26 15:07:47.07745115 +0000 UTC m=+1263.762914255"
Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.305047 4823 kubelet.go:2428]
"SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.575909 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe45863-c887-4c4b-a280-3b5411753cdf" path="/var/lib/kubelet/pods/0fe45863-c887-4c4b-a280-3b5411753cdf/volumes" Jan 26 15:07:47 crc kubenswrapper[4823]: I0126 15:07:47.577226 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57e802a1-56bd-42e5-b02b-15877d9a33e3" path="/var/lib/kubelet/pods/57e802a1-56bd-42e5-b02b-15877d9a33e3/volumes" Jan 26 15:07:48 crc kubenswrapper[4823]: I0126 15:07:48.063849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39eda835-a007-4e12-8a6a-86100eb17105","Type":"ContainerStarted","Data":"e7c27ba6e12e27a690d9d1fb77037993ab2e8b984a4a87b0555b8d42b375857f"} Jan 26 15:07:48 crc kubenswrapper[4823]: I0126 15:07:48.063930 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39eda835-a007-4e12-8a6a-86100eb17105","Type":"ContainerStarted","Data":"8722b4d3be77df565395ade91a66956aa970d5640db145b928ade88e01f05b7f"} Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.078971 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"39eda835-a007-4e12-8a6a-86100eb17105","Type":"ContainerStarted","Data":"088966789df6298b529d03622cc70497a74b5b5bf79e779cc769b88943f2084b"} Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.113965 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.113938673 podStartE2EDuration="3.113938673s" podCreationTimestamp="2026-01-26 15:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:07:49.109562723 +0000 UTC m=+1265.795025828" watchObservedRunningTime="2026-01-26 15:07:49.113938673 +0000 UTC 
m=+1265.799401778" Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.308275 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.386332 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.466253 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-6xjvx"] Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.466729 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerName="dnsmasq-dns" containerID="cri-o://b238185c5a2a4b6a9ca0e56e9e5a331c3d903fd0d076829bd1be4df28216bfeb" gracePeriod=10 Jan 26 15:07:49 crc kubenswrapper[4823]: I0126 15:07:49.821979 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.090068 4823 generic.go:334] "Generic (PLEG): container finished" podID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerID="b238185c5a2a4b6a9ca0e56e9e5a331c3d903fd0d076829bd1be4df28216bfeb" exitCode=0 Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.090131 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" event={"ID":"5fff24cc-23b2-48e1-af92-218fefa1ff89","Type":"ContainerDied","Data":"b238185c5a2a4b6a9ca0e56e9e5a331c3d903fd0d076829bd1be4df28216bfeb"} Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.090184 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" event={"ID":"5fff24cc-23b2-48e1-af92-218fefa1ff89","Type":"ContainerDied","Data":"d2154a68e2f45e463b7a77493e5ea4bd22c8ac1a459f9ec1a922e7461822610e"} Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 
15:07:50.090196 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2154a68e2f45e463b7a77493e5ea4bd22c8ac1a459f9ec1a922e7461822610e" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.090517 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.127056 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.185231 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.270962 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-sb\") pod \"5fff24cc-23b2-48e1-af92-218fefa1ff89\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.271784 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4v8f\" (UniqueName: \"kubernetes.io/projected/5fff24cc-23b2-48e1-af92-218fefa1ff89-kube-api-access-t4v8f\") pod \"5fff24cc-23b2-48e1-af92-218fefa1ff89\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.271862 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-dns-svc\") pod \"5fff24cc-23b2-48e1-af92-218fefa1ff89\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.271904 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-nb\") 
pod \"5fff24cc-23b2-48e1-af92-218fefa1ff89\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.272017 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-config\") pod \"5fff24cc-23b2-48e1-af92-218fefa1ff89\" (UID: \"5fff24cc-23b2-48e1-af92-218fefa1ff89\") " Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.281754 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fff24cc-23b2-48e1-af92-218fefa1ff89-kube-api-access-t4v8f" (OuterVolumeSpecName: "kube-api-access-t4v8f") pod "5fff24cc-23b2-48e1-af92-218fefa1ff89" (UID: "5fff24cc-23b2-48e1-af92-218fefa1ff89"). InnerVolumeSpecName "kube-api-access-t4v8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.347480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fff24cc-23b2-48e1-af92-218fefa1ff89" (UID: "5fff24cc-23b2-48e1-af92-218fefa1ff89"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.356539 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-config" (OuterVolumeSpecName: "config") pod "5fff24cc-23b2-48e1-af92-218fefa1ff89" (UID: "5fff24cc-23b2-48e1-af92-218fefa1ff89"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.357265 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fff24cc-23b2-48e1-af92-218fefa1ff89" (UID: "5fff24cc-23b2-48e1-af92-218fefa1ff89"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.377424 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.377999 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4v8f\" (UniqueName: \"kubernetes.io/projected/5fff24cc-23b2-48e1-af92-218fefa1ff89-kube-api-access-t4v8f\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.378149 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.378212 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.426516 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fff24cc-23b2-48e1-af92-218fefa1ff89" (UID: "5fff24cc-23b2-48e1-af92-218fefa1ff89"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.427303 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.481294 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fff24cc-23b2-48e1-af92-218fefa1ff89-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:50 crc kubenswrapper[4823]: I0126 15:07:50.646167 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.104050 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-6xjvx" Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.104092 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="cinder-scheduler" containerID="cri-o://8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8" gracePeriod=30 Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.104679 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="probe" containerID="cri-o://66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999" gracePeriod=30 Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.148480 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-6xjvx"] Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.156005 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-6xjvx"] Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.574118 4823 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" path="/var/lib/kubelet/pods/5fff24cc-23b2-48e1-af92-218fefa1ff89/volumes" Jan 26 15:07:51 crc kubenswrapper[4823]: I0126 15:07:51.792970 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-75dbc957cb-ckfwc" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.142:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.142:8443: connect: connection refused" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.118155 4823 generic.go:334] "Generic (PLEG): container finished" podID="4963c458-b673-469c-83e8-96f38561d47c" containerID="66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999" exitCode=0 Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.118455 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4963c458-b673-469c-83e8-96f38561d47c","Type":"ContainerDied","Data":"66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999"} Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.736083 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.837319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-combined-ca-bundle\") pod \"4963c458-b673-469c-83e8-96f38561d47c\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.837503 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data\") pod \"4963c458-b673-469c-83e8-96f38561d47c\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.837591 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-scripts\") pod \"4963c458-b673-469c-83e8-96f38561d47c\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.837704 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data-custom\") pod \"4963c458-b673-469c-83e8-96f38561d47c\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.837755 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq58c\" (UniqueName: \"kubernetes.io/projected/4963c458-b673-469c-83e8-96f38561d47c-kube-api-access-hq58c\") pod \"4963c458-b673-469c-83e8-96f38561d47c\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.837809 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/4963c458-b673-469c-83e8-96f38561d47c-etc-machine-id\") pod \"4963c458-b673-469c-83e8-96f38561d47c\" (UID: \"4963c458-b673-469c-83e8-96f38561d47c\") " Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.838150 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4963c458-b673-469c-83e8-96f38561d47c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4963c458-b673-469c-83e8-96f38561d47c" (UID: "4963c458-b673-469c-83e8-96f38561d47c"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.838285 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4963c458-b673-469c-83e8-96f38561d47c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.856871 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-scripts" (OuterVolumeSpecName: "scripts") pod "4963c458-b673-469c-83e8-96f38561d47c" (UID: "4963c458-b673-469c-83e8-96f38561d47c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.856936 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4963c458-b673-469c-83e8-96f38561d47c-kube-api-access-hq58c" (OuterVolumeSpecName: "kube-api-access-hq58c") pod "4963c458-b673-469c-83e8-96f38561d47c" (UID: "4963c458-b673-469c-83e8-96f38561d47c"). InnerVolumeSpecName "kube-api-access-hq58c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.867324 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "4963c458-b673-469c-83e8-96f38561d47c" (UID: "4963c458-b673-469c-83e8-96f38561d47c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.915557 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4963c458-b673-469c-83e8-96f38561d47c" (UID: "4963c458-b673-469c-83e8-96f38561d47c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.940792 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq58c\" (UniqueName: \"kubernetes.io/projected/4963c458-b673-469c-83e8-96f38561d47c-kube-api-access-hq58c\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.940832 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.940844 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.940854 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data-custom\") 
on node \"crc\" DevicePath \"\"" Jan 26 15:07:52 crc kubenswrapper[4823]: I0126 15:07:52.955511 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data" (OuterVolumeSpecName: "config-data") pod "4963c458-b673-469c-83e8-96f38561d47c" (UID: "4963c458-b673-469c-83e8-96f38561d47c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.043120 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4963c458-b673-469c-83e8-96f38561d47c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.137144 4823 generic.go:334] "Generic (PLEG): container finished" podID="4963c458-b673-469c-83e8-96f38561d47c" containerID="8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8" exitCode=0 Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.137208 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4963c458-b673-469c-83e8-96f38561d47c","Type":"ContainerDied","Data":"8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8"} Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.137278 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"4963c458-b673-469c-83e8-96f38561d47c","Type":"ContainerDied","Data":"3f3f563eb393880e3db4181cbc13ffad849f38517574d83f1823cd13d118979e"} Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.137308 4823 scope.go:117] "RemoveContainer" containerID="66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.137481 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.182114 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.195964 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.207900 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:53 crc kubenswrapper[4823]: E0126 15:07:53.208483 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="probe" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.208509 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="probe" Jan 26 15:07:53 crc kubenswrapper[4823]: E0126 15:07:53.208536 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerName="init" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.208544 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerName="init" Jan 26 15:07:53 crc kubenswrapper[4823]: E0126 15:07:53.208555 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="cinder-scheduler" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.208563 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="cinder-scheduler" Jan 26 15:07:53 crc kubenswrapper[4823]: E0126 15:07:53.208586 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerName="dnsmasq-dns" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.208593 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerName="dnsmasq-dns" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.209661 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="cinder-scheduler" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.209694 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4963c458-b673-469c-83e8-96f38561d47c" containerName="probe" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.209711 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fff24cc-23b2-48e1-af92-218fefa1ff89" containerName="dnsmasq-dns" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.210888 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.220799 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.222092 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.246911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.246998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2e373a7-6b26-47ee-9748-da6d2212c1fe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 
15:07:53.247038 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qsg4\" (UniqueName: \"kubernetes.io/projected/f2e373a7-6b26-47ee-9748-da6d2212c1fe-kube-api-access-2qsg4\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.247061 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.247083 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.247104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.348845 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2e373a7-6b26-47ee-9748-da6d2212c1fe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.348919 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2qsg4\" (UniqueName: \"kubernetes.io/projected/f2e373a7-6b26-47ee-9748-da6d2212c1fe-kube-api-access-2qsg4\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.348946 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.348965 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.348982 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/f2e373a7-6b26-47ee-9748-da6d2212c1fe-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.348993 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.350504 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.381236 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-scripts\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.381820 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.382510 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-config-data\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.428216 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2e373a7-6b26-47ee-9748-da6d2212c1fe-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.471943 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qsg4\" (UniqueName: \"kubernetes.io/projected/f2e373a7-6b26-47ee-9748-da6d2212c1fe-kube-api-access-2qsg4\") pod \"cinder-scheduler-0\" (UID: \"f2e373a7-6b26-47ee-9748-da6d2212c1fe\") " pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.538989 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 26 15:07:53 crc kubenswrapper[4823]: I0126 15:07:53.576326 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4963c458-b673-469c-83e8-96f38561d47c" path="/var/lib/kubelet/pods/4963c458-b673-469c-83e8-96f38561d47c/volumes" Jan 26 15:07:56 crc kubenswrapper[4823]: E0126 15:07:56.044213 4823 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: , extraDiskErr: could not stat "/var/log/pods/openstack_cinder-scheduler-0_4963c458-b673-469c-83e8-96f38561d47c/probe/0.log" to get inode usage: stat /var/log/pods/openstack_cinder-scheduler-0_4963c458-b673-469c-83e8-96f38561d47c/probe/0.log: no such file or directory Jan 26 15:07:56 crc kubenswrapper[4823]: I0126 15:07:56.820569 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:56 crc kubenswrapper[4823]: I0126 15:07:56.909133 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.188112 4823 generic.go:334] "Generic (PLEG): container finished" podID="4f681696-41f2-470d-805c-5b70ea803542" containerID="d81b813c0f64e4fbf4e118c4fde902c88138d105ee75288b1a27977e3f090c94" exitCode=137 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.188221 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dbc957cb-ckfwc" event={"ID":"4f681696-41f2-470d-805c-5b70ea803542","Type":"ContainerDied","Data":"d81b813c0f64e4fbf4e118c4fde902c88138d105ee75288b1a27977e3f090c94"} Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.267592 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-67c696b96b-69j89" Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.334862 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-558766bffd-sp28z"] Jan 26 
15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.335129 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-558766bffd-sp28z" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api-log" containerID="cri-o://df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176" gracePeriod=30 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.335816 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-558766bffd-sp28z" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api" containerID="cri-o://348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252" gracePeriod=30 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.629053 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7c575987db-2cpjc" Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.681511 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.681884 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-central-agent" containerID="cri-o://80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f" gracePeriod=30 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.682893 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="proxy-httpd" containerID="cri-o://c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697" gracePeriod=30 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.682966 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="sg-core" 
containerID="cri-o://59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605" gracePeriod=30 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.683047 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-notification-agent" containerID="cri-o://73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297" gracePeriod=30 Jan 26 15:07:57 crc kubenswrapper[4823]: I0126 15:07:57.712819 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.224476 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerID="c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697" exitCode=0 Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.224532 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerID="59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605" exitCode=2 Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.224544 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerID="80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f" exitCode=0 Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.224539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerDied","Data":"c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697"} Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.224610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerDied","Data":"59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605"} Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.224621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerDied","Data":"80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f"} Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.230242 4823 generic.go:334] "Generic (PLEG): container finished" podID="9c481587-9ea3-4191-b140-728f6e314195" containerID="df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176" exitCode=143 Jan 26 15:07:58 crc kubenswrapper[4823]: I0126 15:07:58.230282 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558766bffd-sp28z" event={"ID":"9c481587-9ea3-4191-b140-728f6e314195","Type":"ContainerDied","Data":"df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176"} Jan 26 15:07:59 crc kubenswrapper[4823]: I0126 15:07:59.352466 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.303169 4823 scope.go:117] "RemoveContainer" containerID="8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.460488 4823 scope.go:117] "RemoveContainer" containerID="66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999" Jan 26 15:08:00 crc kubenswrapper[4823]: E0126 15:08:00.464620 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999\": container with ID starting with 66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999 not found: ID does not exist" 
containerID="66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.464678 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999"} err="failed to get container status \"66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999\": rpc error: code = NotFound desc = could not find container \"66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999\": container with ID starting with 66ac3336b797f44d5cee312dda36030d439d5f739f56fff3c0662f07d3664999 not found: ID does not exist" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.464714 4823 scope.go:117] "RemoveContainer" containerID="8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8" Jan 26 15:08:00 crc kubenswrapper[4823]: E0126 15:08:00.472534 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8\": container with ID starting with 8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8 not found: ID does not exist" containerID="8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.472598 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8"} err="failed to get container status \"8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8\": rpc error: code = NotFound desc = could not find container \"8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8\": container with ID starting with 8d2945cd1873082cf24e73061c55d05e8e036f8fdeb779c8a846914d16c13ee8 not found: ID does not exist" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.787475 4823 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.834992 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-558766bffd-sp28z" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:51494->10.217.0.154:9311: read: connection reset by peer" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.835522 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-558766bffd-sp28z" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.154:9311/healthcheck\": read tcp 10.217.0.2:51508->10.217.0.154:9311: read: connection reset by peer" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.912333 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czmfm\" (UniqueName: \"kubernetes.io/projected/4f681696-41f2-470d-805c-5b70ea803542-kube-api-access-czmfm\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.912559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f681696-41f2-470d-805c-5b70ea803542-logs\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.913475 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f681696-41f2-470d-805c-5b70ea803542-logs" (OuterVolumeSpecName: "logs") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.913583 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-tls-certs\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.913662 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-combined-ca-bundle\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.914168 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-secret-key\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.914249 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-scripts\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.914355 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-config-data\") pod \"4f681696-41f2-470d-805c-5b70ea803542\" (UID: \"4f681696-41f2-470d-805c-5b70ea803542\") " Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.914953 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4f681696-41f2-470d-805c-5b70ea803542-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.919397 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f681696-41f2-470d-805c-5b70ea803542-kube-api-access-czmfm" (OuterVolumeSpecName: "kube-api-access-czmfm") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "kube-api-access-czmfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.919709 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.955568 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-config-data" (OuterVolumeSpecName: "config-data") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.959667 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.960778 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-scripts" (OuterVolumeSpecName: "scripts") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.962199 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "4f681696-41f2-470d-805c-5b70ea803542" (UID: "4f681696-41f2-470d-805c-5b70ea803542"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:00 crc kubenswrapper[4823]: I0126 15:08:00.986705 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.016244 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.016275 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.016286 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.016294 4823 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/4f681696-41f2-470d-805c-5b70ea803542-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.016305 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czmfm\" (UniqueName: \"kubernetes.io/projected/4f681696-41f2-470d-805c-5b70ea803542-kube-api-access-czmfm\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.016315 4823 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f681696-41f2-470d-805c-5b70ea803542-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.201311 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.280460 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75dbc957cb-ckfwc" event={"ID":"4f681696-41f2-470d-805c-5b70ea803542","Type":"ContainerDied","Data":"eec9d08fba070fb300eb8d5a35356cbd3fd818901511c244c830b8f3c1e3d9fa"} Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.280535 4823 scope.go:117] "RemoveContainer" containerID="ae177c04f8f1dcb05bd9666d753f2e2bd9fda6779e26a6637805474c893cb5fe" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.280656 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75dbc957cb-ckfwc" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.299331 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"63a706b5-54fa-4b1a-a755-04a5a7a52973","Type":"ContainerStarted","Data":"059795c0f3dac6440b1a9414e26ca4c1a566db3fee1b98da1410579f011dcb14"} Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.307246 4823 generic.go:334] "Generic (PLEG): container finished" podID="9c481587-9ea3-4191-b140-728f6e314195" containerID="348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252" exitCode=0 Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.307300 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558766bffd-sp28z" event={"ID":"9c481587-9ea3-4191-b140-728f6e314195","Type":"ContainerDied","Data":"348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252"} Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.307285 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-558766bffd-sp28z" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.307416 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-558766bffd-sp28z" event={"ID":"9c481587-9ea3-4191-b140-728f6e314195","Type":"ContainerDied","Data":"66976e5b725d647c09e277a0e5452e489abe24c758d0f441db52bc3958407849"} Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.321462 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c481587-9ea3-4191-b140-728f6e314195-logs\") pod \"9c481587-9ea3-4191-b140-728f6e314195\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.321589 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-combined-ca-bundle\") pod \"9c481587-9ea3-4191-b140-728f6e314195\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.321725 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nvgw\" (UniqueName: \"kubernetes.io/projected/9c481587-9ea3-4191-b140-728f6e314195-kube-api-access-2nvgw\") pod \"9c481587-9ea3-4191-b140-728f6e314195\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.321847 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data\") pod \"9c481587-9ea3-4191-b140-728f6e314195\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.321908 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data-custom\") pod \"9c481587-9ea3-4191-b140-728f6e314195\" (UID: \"9c481587-9ea3-4191-b140-728f6e314195\") " Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.326554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c481587-9ea3-4191-b140-728f6e314195-logs" (OuterVolumeSpecName: "logs") pod "9c481587-9ea3-4191-b140-728f6e314195" (UID: "9c481587-9ea3-4191-b140-728f6e314195"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.327501 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9c481587-9ea3-4191-b140-728f6e314195" (UID: "9c481587-9ea3-4191-b140-728f6e314195"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.328654 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2e373a7-6b26-47ee-9748-da6d2212c1fe","Type":"ContainerStarted","Data":"e2ff955b033d8c4667c353f346d929a825ace34935c6aa253e5d14dacbd1042a"} Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.330311 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c481587-9ea3-4191-b140-728f6e314195-kube-api-access-2nvgw" (OuterVolumeSpecName: "kube-api-access-2nvgw") pod "9c481587-9ea3-4191-b140-728f6e314195" (UID: "9c481587-9ea3-4191-b140-728f6e314195"). InnerVolumeSpecName "kube-api-access-2nvgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.369672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c481587-9ea3-4191-b140-728f6e314195" (UID: "9c481587-9ea3-4191-b140-728f6e314195"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.391182 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data" (OuterVolumeSpecName: "config-data") pod "9c481587-9ea3-4191-b140-728f6e314195" (UID: "9c481587-9ea3-4191-b140-728f6e314195"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.425142 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.425192 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.425205 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c481587-9ea3-4191-b140-728f6e314195-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.425216 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c481587-9ea3-4191-b140-728f6e314195-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 
crc kubenswrapper[4823]: I0126 15:08:01.425229 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nvgw\" (UniqueName: \"kubernetes.io/projected/9c481587-9ea3-4191-b140-728f6e314195-kube-api-access-2nvgw\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.465124 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.232400404 podStartE2EDuration="16.465097395s" podCreationTimestamp="2026-01-26 15:07:45 +0000 UTC" firstStartedPulling="2026-01-26 15:07:46.174813927 +0000 UTC m=+1262.860277042" lastFinishedPulling="2026-01-26 15:08:00.407510928 +0000 UTC m=+1277.092974033" observedRunningTime="2026-01-26 15:08:01.317943584 +0000 UTC m=+1278.003406689" watchObservedRunningTime="2026-01-26 15:08:01.465097395 +0000 UTC m=+1278.150560500" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.467954 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75dbc957cb-ckfwc"] Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.480781 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75dbc957cb-ckfwc"] Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.573205 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f681696-41f2-470d-805c-5b70ea803542" path="/var/lib/kubelet/pods/4f681696-41f2-470d-805c-5b70ea803542/volumes" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.603148 4823 scope.go:117] "RemoveContainer" containerID="d81b813c0f64e4fbf4e118c4fde902c88138d105ee75288b1a27977e3f090c94" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.633049 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-558766bffd-sp28z"] Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.644891 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-558766bffd-sp28z"] Jan 26 15:08:01 crc kubenswrapper[4823]: 
I0126 15:08:01.654378 4823 scope.go:117] "RemoveContainer" containerID="348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.685445 4823 scope.go:117] "RemoveContainer" containerID="df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.720946 4823 scope.go:117] "RemoveContainer" containerID="348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252" Jan 26 15:08:01 crc kubenswrapper[4823]: E0126 15:08:01.721497 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252\": container with ID starting with 348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252 not found: ID does not exist" containerID="348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.721534 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252"} err="failed to get container status \"348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252\": rpc error: code = NotFound desc = could not find container \"348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252\": container with ID starting with 348c6b8f3aed54e89e9895a2dafdddd9eaf3fa8d470b394d1661f7e7372e7252 not found: ID does not exist" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.721558 4823 scope.go:117] "RemoveContainer" containerID="df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176" Jan 26 15:08:01 crc kubenswrapper[4823]: E0126 15:08:01.722068 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176\": 
container with ID starting with df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176 not found: ID does not exist" containerID="df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176" Jan 26 15:08:01 crc kubenswrapper[4823]: I0126 15:08:01.722113 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176"} err="failed to get container status \"df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176\": rpc error: code = NotFound desc = could not find container \"df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176\": container with ID starting with df8eeefa18d84e268d356b4272f5ae7287c3bf9536b1aa9742cbcb8d8d174176 not found: ID does not exist" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.426735 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.475616 4823 generic.go:334] "Generic (PLEG): container finished" podID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerID="73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297" exitCode=0 Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.475787 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerDied","Data":"73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297"} Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.475823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2fc957d6-b6e5-4fad-91cb-e78f450611c9","Type":"ContainerDied","Data":"927d31f868c9e6f5dfb1aa82516a9eb78e0882da0cb447d8d64c6e5d7137c74e"} Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.475865 4823 scope.go:117] "RemoveContainer" containerID="c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697" 
Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.476060 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.550464 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2e373a7-6b26-47ee-9748-da6d2212c1fe","Type":"ContainerStarted","Data":"db8eb580f0829ecd55452e239825a8a8c4cc45be4056996c27b4520ffb4fb869"} Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.564570 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz447\" (UniqueName: \"kubernetes.io/projected/2fc957d6-b6e5-4fad-91cb-e78f450611c9-kube-api-access-fz447\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.564743 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-run-httpd\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.564822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-log-httpd\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.564916 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-scripts\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.564982 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-config-data\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.565117 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-combined-ca-bundle\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.565160 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-sg-core-conf-yaml\") pod \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\" (UID: \"2fc957d6-b6e5-4fad-91cb-e78f450611c9\") " Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.570347 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.574811 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.575639 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.575661 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2fc957d6-b6e5-4fad-91cb-e78f450611c9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.581335 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-scripts" (OuterVolumeSpecName: "scripts") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.594900 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc957d6-b6e5-4fad-91cb-e78f450611c9-kube-api-access-fz447" (OuterVolumeSpecName: "kube-api-access-fz447") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). InnerVolumeSpecName "kube-api-access-fz447". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.615846 4823 scope.go:117] "RemoveContainer" containerID="59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.633452 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.677685 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.677728 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.677738 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz447\" (UniqueName: \"kubernetes.io/projected/2fc957d6-b6e5-4fad-91cb-e78f450611c9-kube-api-access-fz447\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.730724 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.757176 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-config-data" (OuterVolumeSpecName: "config-data") pod "2fc957d6-b6e5-4fad-91cb-e78f450611c9" (UID: "2fc957d6-b6e5-4fad-91cb-e78f450611c9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.774411 4823 scope.go:117] "RemoveContainer" containerID="73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.779220 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.779254 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc957d6-b6e5-4fad-91cb-e78f450611c9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.804661 4823 scope.go:117] "RemoveContainer" containerID="80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.835336 4823 scope.go:117] "RemoveContainer" containerID="c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.836588 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.847512 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697\": container with ID starting with c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697 not found: ID does not exist" containerID="c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.847810 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697"} err="failed to get container 
status \"c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697\": rpc error: code = NotFound desc = could not find container \"c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697\": container with ID starting with c1c0e939851bdc97c40b22af9a2f6868ff7af2a881279a9f2759ae4eb961d697 not found: ID does not exist" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.847905 4823 scope.go:117] "RemoveContainer" containerID="59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.857332 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.857545 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605\": container with ID starting with 59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605 not found: ID does not exist" containerID="59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.857580 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605"} err="failed to get container status \"59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605\": rpc error: code = NotFound desc = could not find container \"59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605\": container with ID starting with 59d2e1d9933cd262de6295f7007cdbd97345dc20cdcc7fede9013341519df605 not found: ID does not exist" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.857617 4823 scope.go:117] "RemoveContainer" containerID="73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.863860 4823 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297\": container with ID starting with 73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297 not found: ID does not exist" containerID="73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.863915 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297"} err="failed to get container status \"73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297\": rpc error: code = NotFound desc = could not find container \"73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297\": container with ID starting with 73476b01d547f25f927d6a4ec1aa49f5ae3feb20d3fc4b7533ac36b7ec0c3297 not found: ID does not exist" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.863971 4823 scope.go:117] "RemoveContainer" containerID="80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.864754 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f\": container with ID starting with 80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f not found: ID does not exist" containerID="80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.864891 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f"} err="failed to get container status \"80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f\": rpc error: code = NotFound desc = could not find container 
\"80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f\": container with ID starting with 80f7c5660cae53d7eb4aadfca33383caf57fc7ffb0ad1e147074f3047e02880f not found: ID does not exist" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.870048 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871741 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="sg-core" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871760 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="sg-core" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871776 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871783 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871807 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon-log" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871815 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon-log" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871836 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-notification-agent" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871843 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-notification-agent" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871855 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="proxy-httpd" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871861 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="proxy-httpd" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871890 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-central-agent" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871898 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-central-agent" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871914 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871921 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" Jan 26 15:08:02 crc kubenswrapper[4823]: E0126 15:08:02.871939 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api-log" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.871945 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api-log" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882675 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882734 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-central-agent" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882764 4823 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4f681696-41f2-470d-805c-5b70ea803542" containerName="horizon-log" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882804 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api-log" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882830 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="sg-core" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882850 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c481587-9ea3-4191-b140-728f6e314195" containerName="barbican-api" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882862 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="ceilometer-notification-agent" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.882885 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" containerName="proxy-httpd" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.890078 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.890233 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.898625 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.899211 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.984955 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgfd5\" (UniqueName: \"kubernetes.io/projected/1433dd20-acb3-442a-a720-81d9c7a7251f-kube-api-access-kgfd5\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.985039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.985061 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.985097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-scripts\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.985120 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-config-data\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.985180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-log-httpd\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:02 crc kubenswrapper[4823]: I0126 15:08:02.985222 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-run-httpd\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.086939 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.087007 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.087057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-scripts\") pod \"ceilometer-0\" (UID: 
\"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.087092 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-config-data\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.087162 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-log-httpd\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.087206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-run-httpd\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.087258 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgfd5\" (UniqueName: \"kubernetes.io/projected/1433dd20-acb3-442a-a720-81d9c7a7251f-kube-api-access-kgfd5\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.088030 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-run-httpd\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.088511 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-log-httpd\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.092890 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.093095 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-scripts\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.094244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.095799 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-config-data\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.109068 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgfd5\" (UniqueName: \"kubernetes.io/projected/1433dd20-acb3-442a-a720-81d9c7a7251f-kube-api-access-kgfd5\") pod \"ceilometer-0\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.247794 4823 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.594196 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc957d6-b6e5-4fad-91cb-e78f450611c9" path="/var/lib/kubelet/pods/2fc957d6-b6e5-4fad-91cb-e78f450611c9/volumes" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.600254 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c481587-9ea3-4191-b140-728f6e314195" path="/var/lib/kubelet/pods/9c481587-9ea3-4191-b140-728f6e314195/volumes" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.601796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"f2e373a7-6b26-47ee-9748-da6d2212c1fe","Type":"ContainerStarted","Data":"59af9bc3b8b9474179d7d04c58c7be398e7c1742fce49ae97b85b9768e6caf1d"} Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.640264 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.640235366 podStartE2EDuration="10.640235366s" podCreationTimestamp="2026-01-26 15:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:03.629602616 +0000 UTC m=+1280.315065721" watchObservedRunningTime="2026-01-26 15:08:03.640235366 +0000 UTC m=+1280.325698461" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.721781 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-97whl"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.723199 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.743961 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-97whl"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.766664 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.834874 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-7w4md"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.836257 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.855427 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-cb5a-account-create-update-27lns"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.856924 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.860843 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.879693 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7w4md"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.895966 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-cb5a-account-create-update-27lns"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.903707 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77121317-dc3e-497c-878a-b3077fef4920-operator-scripts\") pod \"nova-api-db-create-97whl\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.903869 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjxbv\" (UniqueName: \"kubernetes.io/projected/77121317-dc3e-497c-878a-b3077fef4920-kube-api-access-qjxbv\") pod \"nova-api-db-create-97whl\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.937873 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-9jh8h"] Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.939146 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:03 crc kubenswrapper[4823]: I0126 15:08:03.950125 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9jh8h"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.007249 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx5pc\" (UniqueName: \"kubernetes.io/projected/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-kube-api-access-gx5pc\") pod \"nova-cell0-db-create-7w4md\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.007609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjxbv\" (UniqueName: \"kubernetes.io/projected/77121317-dc3e-497c-878a-b3077fef4920-kube-api-access-qjxbv\") pod \"nova-api-db-create-97whl\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.007763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-operator-scripts\") pod \"nova-cell0-db-create-7w4md\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.007928 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77121317-dc3e-497c-878a-b3077fef4920-operator-scripts\") pod \"nova-api-db-create-97whl\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.008073 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c430bc54-e863-4d5d-bb23-0f54084f28a0-operator-scripts\") pod \"nova-api-cb5a-account-create-update-27lns\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.008222 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f5nl\" (UniqueName: \"kubernetes.io/projected/c430bc54-e863-4d5d-bb23-0f54084f28a0-kube-api-access-2f5nl\") pod \"nova-api-cb5a-account-create-update-27lns\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.008833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77121317-dc3e-497c-878a-b3077fef4920-operator-scripts\") pod \"nova-api-db-create-97whl\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.038986 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2831-account-create-update-f6wxs"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.042562 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.046746 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.054813 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2831-account-create-update-f6wxs"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.069043 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjxbv\" (UniqueName: \"kubernetes.io/projected/77121317-dc3e-497c-878a-b3077fef4920-kube-api-access-qjxbv\") pod \"nova-api-db-create-97whl\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.099318 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.110728 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx5pc\" (UniqueName: \"kubernetes.io/projected/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-kube-api-access-gx5pc\") pod \"nova-cell0-db-create-7w4md\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.110806 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-operator-scripts\") pod \"nova-cell0-db-create-7w4md\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.110863 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c430bc54-e863-4d5d-bb23-0f54084f28a0-operator-scripts\") pod \"nova-api-cb5a-account-create-update-27lns\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.110893 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f5nl\" (UniqueName: \"kubernetes.io/projected/c430bc54-e863-4d5d-bb23-0f54084f28a0-kube-api-access-2f5nl\") pod \"nova-api-cb5a-account-create-update-27lns\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.110927 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4119b40-4749-455b-9bba-68fdf24554a0-operator-scripts\") pod \"nova-cell1-db-create-9jh8h\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.110983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8n2g\" (UniqueName: \"kubernetes.io/projected/c4119b40-4749-455b-9bba-68fdf24554a0-kube-api-access-w8n2g\") pod \"nova-cell1-db-create-9jh8h\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.112036 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c430bc54-e863-4d5d-bb23-0f54084f28a0-operator-scripts\") pod \"nova-api-cb5a-account-create-update-27lns\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.112070 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-operator-scripts\") pod \"nova-cell0-db-create-7w4md\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.130337 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx5pc\" (UniqueName: \"kubernetes.io/projected/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-kube-api-access-gx5pc\") pod \"nova-cell0-db-create-7w4md\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.131285 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f5nl\" (UniqueName: \"kubernetes.io/projected/c430bc54-e863-4d5d-bb23-0f54084f28a0-kube-api-access-2f5nl\") pod \"nova-api-cb5a-account-create-update-27lns\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.154950 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.179499 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.213616 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8n2g\" (UniqueName: \"kubernetes.io/projected/c4119b40-4749-455b-9bba-68fdf24554a0-kube-api-access-w8n2g\") pod \"nova-cell1-db-create-9jh8h\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.214580 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkdsm\" (UniqueName: \"kubernetes.io/projected/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-kube-api-access-jkdsm\") pod \"nova-cell0-2831-account-create-update-f6wxs\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.214672 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4119b40-4749-455b-9bba-68fdf24554a0-operator-scripts\") pod \"nova-cell1-db-create-9jh8h\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.215553 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-operator-scripts\") pod \"nova-cell0-2831-account-create-update-f6wxs\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.216688 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4119b40-4749-455b-9bba-68fdf24554a0-operator-scripts\") pod \"nova-cell1-db-create-9jh8h\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.231766 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-da95-account-create-update-ctl6c"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.233003 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.236251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8n2g\" (UniqueName: \"kubernetes.io/projected/c4119b40-4749-455b-9bba-68fdf24554a0-kube-api-access-w8n2g\") pod \"nova-cell1-db-create-9jh8h\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.241274 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.252135 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da95-account-create-update-ctl6c"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.305876 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.318591 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2r24\" (UniqueName: \"kubernetes.io/projected/c97a7ddc-c557-4d5c-80d7-60fd099d192d-kube-api-access-n2r24\") pod \"nova-cell1-da95-account-create-update-ctl6c\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.318654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97a7ddc-c557-4d5c-80d7-60fd099d192d-operator-scripts\") pod \"nova-cell1-da95-account-create-update-ctl6c\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.318707 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkdsm\" (UniqueName: \"kubernetes.io/projected/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-kube-api-access-jkdsm\") pod \"nova-cell0-2831-account-create-update-f6wxs\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.318784 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-operator-scripts\") pod \"nova-cell0-2831-account-create-update-f6wxs\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.319600 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-operator-scripts\") pod \"nova-cell0-2831-account-create-update-f6wxs\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.338870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkdsm\" (UniqueName: \"kubernetes.io/projected/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-kube-api-access-jkdsm\") pod \"nova-cell0-2831-account-create-update-f6wxs\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.392239 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.420842 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2r24\" (UniqueName: \"kubernetes.io/projected/c97a7ddc-c557-4d5c-80d7-60fd099d192d-kube-api-access-n2r24\") pod \"nova-cell1-da95-account-create-update-ctl6c\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.421358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97a7ddc-c557-4d5c-80d7-60fd099d192d-operator-scripts\") pod \"nova-cell1-da95-account-create-update-ctl6c\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.423087 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97a7ddc-c557-4d5c-80d7-60fd099d192d-operator-scripts\") pod 
\"nova-cell1-da95-account-create-update-ctl6c\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.444847 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2r24\" (UniqueName: \"kubernetes.io/projected/c97a7ddc-c557-4d5c-80d7-60fd099d192d-kube-api-access-n2r24\") pod \"nova-cell1-da95-account-create-update-ctl6c\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.556444 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.600567 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerStarted","Data":"a5160ea8f85cb8cdb5b97de2525b7ff5ada408ef00c82a695c95271a7ecd260b"} Jan 26 15:08:04 crc kubenswrapper[4823]: W0126 15:08:04.688293 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77121317_dc3e_497c_878a_b3077fef4920.slice/crio-c6b11393390b2b691ffdfb03fd9923a74b336cefa82f5e781fa65d3999f053b2 WatchSource:0}: Error finding container c6b11393390b2b691ffdfb03fd9923a74b336cefa82f5e781fa65d3999f053b2: Status 404 returned error can't find the container with id c6b11393390b2b691ffdfb03fd9923a74b336cefa82f5e781fa65d3999f053b2 Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.702692 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-97whl"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 15:08:04.913926 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-7w4md"] Jan 26 15:08:04 crc kubenswrapper[4823]: I0126 
15:08:04.937046 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-cb5a-account-create-update-27lns"] Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.074586 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-9jh8h"] Jan 26 15:08:05 crc kubenswrapper[4823]: W0126 15:08:05.081820 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4119b40_4749_455b_9bba_68fdf24554a0.slice/crio-a2b645405b2d2b36c9bfc4992b465c98566bf4783eabaea398c2df7e1804c5cd WatchSource:0}: Error finding container a2b645405b2d2b36c9bfc4992b465c98566bf4783eabaea398c2df7e1804c5cd: Status 404 returned error can't find the container with id a2b645405b2d2b36c9bfc4992b465c98566bf4783eabaea398c2df7e1804c5cd Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.158691 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-da95-account-create-update-ctl6c"] Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.170609 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2831-account-create-update-f6wxs"] Jan 26 15:08:05 crc kubenswrapper[4823]: W0126 15:08:05.176061 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdd4861e_57bf_42d5_a4c8_afa4dbd0a79f.slice/crio-387b5ca2523dba371ebd9319ee91c7044511118e65d72ea271065376e950d895 WatchSource:0}: Error finding container 387b5ca2523dba371ebd9319ee91c7044511118e65d72ea271065376e950d895: Status 404 returned error can't find the container with id 387b5ca2523dba371ebd9319ee91c7044511118e65d72ea271065376e950d895 Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.613898 4823 generic.go:334] "Generic (PLEG): container finished" podID="c4119b40-4749-455b-9bba-68fdf24554a0" containerID="6c4a85ec665ab61c8ab8adc091cff924710e41b838317144fc7522187cc7eddc" exitCode=0 Jan 26 
15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.614079 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9jh8h" event={"ID":"c4119b40-4749-455b-9bba-68fdf24554a0","Type":"ContainerDied","Data":"6c4a85ec665ab61c8ab8adc091cff924710e41b838317144fc7522187cc7eddc"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.615349 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9jh8h" event={"ID":"c4119b40-4749-455b-9bba-68fdf24554a0","Type":"ContainerStarted","Data":"a2b645405b2d2b36c9bfc4992b465c98566bf4783eabaea398c2df7e1804c5cd"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.617976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerStarted","Data":"4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.620931 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" event={"ID":"c97a7ddc-c557-4d5c-80d7-60fd099d192d","Type":"ContainerStarted","Data":"f644c79cade7b2e43f64288bd4a71c707b7c7294998c671c2b2cec1ddfbea982"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.620968 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" event={"ID":"c97a7ddc-c557-4d5c-80d7-60fd099d192d","Type":"ContainerStarted","Data":"d4c0e6824cbd6f0f81882969bb3b0c1b01f0724b09cbba9d2997447912eea6a3"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.621976 4823 generic.go:334] "Generic (PLEG): container finished" podID="7610ca35-36e4-45dd-b20d-e0ea80b3f62d" containerID="48b196c1f4a1cf16909793a13f358ff60dfb0e0196ead0c7023803be3958c56a" exitCode=0 Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.622038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7w4md" 
event={"ID":"7610ca35-36e4-45dd-b20d-e0ea80b3f62d","Type":"ContainerDied","Data":"48b196c1f4a1cf16909793a13f358ff60dfb0e0196ead0c7023803be3958c56a"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.622062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7w4md" event={"ID":"7610ca35-36e4-45dd-b20d-e0ea80b3f62d","Type":"ContainerStarted","Data":"dec014966dcc97314dcc7be7294181ca87a61e3087d5e72c8eb49a1bc0fb8f19"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.624876 4823 generic.go:334] "Generic (PLEG): container finished" podID="77121317-dc3e-497c-878a-b3077fef4920" containerID="04c589ae20d714bef4a03f8fda76536be574017b4b25e6a5c8710fd5544a948c" exitCode=0 Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.624963 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-97whl" event={"ID":"77121317-dc3e-497c-878a-b3077fef4920","Type":"ContainerDied","Data":"04c589ae20d714bef4a03f8fda76536be574017b4b25e6a5c8710fd5544a948c"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.625933 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-97whl" event={"ID":"77121317-dc3e-497c-878a-b3077fef4920","Type":"ContainerStarted","Data":"c6b11393390b2b691ffdfb03fd9923a74b336cefa82f5e781fa65d3999f053b2"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.631959 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" event={"ID":"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f","Type":"ContainerStarted","Data":"bc55c1811af913365a6c6392936f2e02461e211ddf16ab71c100c3545cd4a870"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.632038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" event={"ID":"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f","Type":"ContainerStarted","Data":"387b5ca2523dba371ebd9319ee91c7044511118e65d72ea271065376e950d895"} 
Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.637644 4823 generic.go:334] "Generic (PLEG): container finished" podID="c430bc54-e863-4d5d-bb23-0f54084f28a0" containerID="9bbc7a504b2809ba634cb5b53d12302e8ae9f801da5448329bd4e42de98d60aa" exitCode=0 Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.637714 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cb5a-account-create-update-27lns" event={"ID":"c430bc54-e863-4d5d-bb23-0f54084f28a0","Type":"ContainerDied","Data":"9bbc7a504b2809ba634cb5b53d12302e8ae9f801da5448329bd4e42de98d60aa"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.637749 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cb5a-account-create-update-27lns" event={"ID":"c430bc54-e863-4d5d-bb23-0f54084f28a0","Type":"ContainerStarted","Data":"d6d1314dfacd626de3d36fd626f6bd9ff70ad393a465b76c8cd56aeb158f96e4"} Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.679349 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" podStartSLOduration=1.679324641 podStartE2EDuration="1.679324641s" podCreationTimestamp="2026-01-26 15:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:05.671881118 +0000 UTC m=+1282.357344233" watchObservedRunningTime="2026-01-26 15:08:05.679324641 +0000 UTC m=+1282.364787746" Jan 26 15:08:05 crc kubenswrapper[4823]: I0126 15:08:05.714143 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" podStartSLOduration=1.714116342 podStartE2EDuration="1.714116342s" podCreationTimestamp="2026-01-26 15:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:05.701597379 +0000 UTC m=+1282.387060474" 
watchObservedRunningTime="2026-01-26 15:08:05.714116342 +0000 UTC m=+1282.399579447" Jan 26 15:08:06 crc kubenswrapper[4823]: I0126 15:08:06.670424 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerStarted","Data":"59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77"} Jan 26 15:08:06 crc kubenswrapper[4823]: I0126 15:08:06.677066 4823 generic.go:334] "Generic (PLEG): container finished" podID="c97a7ddc-c557-4d5c-80d7-60fd099d192d" containerID="f644c79cade7b2e43f64288bd4a71c707b7c7294998c671c2b2cec1ddfbea982" exitCode=0 Jan 26 15:08:06 crc kubenswrapper[4823]: I0126 15:08:06.677164 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" event={"ID":"c97a7ddc-c557-4d5c-80d7-60fd099d192d","Type":"ContainerDied","Data":"f644c79cade7b2e43f64288bd4a71c707b7c7294998c671c2b2cec1ddfbea982"} Jan 26 15:08:06 crc kubenswrapper[4823]: I0126 15:08:06.681338 4823 generic.go:334] "Generic (PLEG): container finished" podID="cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" containerID="bc55c1811af913365a6c6392936f2e02461e211ddf16ab71c100c3545cd4a870" exitCode=0 Jan 26 15:08:06 crc kubenswrapper[4823]: I0126 15:08:06.682267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" event={"ID":"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f","Type":"ContainerDied","Data":"bc55c1811af913365a6c6392936f2e02461e211ddf16ab71c100c3545cd4a870"} Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.134657 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.277570 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.292563 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c430bc54-e863-4d5d-bb23-0f54084f28a0-operator-scripts\") pod \"c430bc54-e863-4d5d-bb23-0f54084f28a0\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.300346 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f5nl\" (UniqueName: \"kubernetes.io/projected/c430bc54-e863-4d5d-bb23-0f54084f28a0-kube-api-access-2f5nl\") pod \"c430bc54-e863-4d5d-bb23-0f54084f28a0\" (UID: \"c430bc54-e863-4d5d-bb23-0f54084f28a0\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.293755 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c430bc54-e863-4d5d-bb23-0f54084f28a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c430bc54-e863-4d5d-bb23-0f54084f28a0" (UID: "c430bc54-e863-4d5d-bb23-0f54084f28a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.304408 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c430bc54-e863-4d5d-bb23-0f54084f28a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.309585 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c430bc54-e863-4d5d-bb23-0f54084f28a0-kube-api-access-2f5nl" (OuterVolumeSpecName: "kube-api-access-2f5nl") pod "c430bc54-e863-4d5d-bb23-0f54084f28a0" (UID: "c430bc54-e863-4d5d-bb23-0f54084f28a0"). InnerVolumeSpecName "kube-api-access-2f5nl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.316056 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.320301 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.405882 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-operator-scripts\") pod \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.405949 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx5pc\" (UniqueName: \"kubernetes.io/projected/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-kube-api-access-gx5pc\") pod \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\" (UID: \"7610ca35-36e4-45dd-b20d-e0ea80b3f62d\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.406018 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77121317-dc3e-497c-878a-b3077fef4920-operator-scripts\") pod \"77121317-dc3e-497c-878a-b3077fef4920\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.406112 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjxbv\" (UniqueName: \"kubernetes.io/projected/77121317-dc3e-497c-878a-b3077fef4920-kube-api-access-qjxbv\") pod \"77121317-dc3e-497c-878a-b3077fef4920\" (UID: \"77121317-dc3e-497c-878a-b3077fef4920\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.406158 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-w8n2g\" (UniqueName: \"kubernetes.io/projected/c4119b40-4749-455b-9bba-68fdf24554a0-kube-api-access-w8n2g\") pod \"c4119b40-4749-455b-9bba-68fdf24554a0\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.406204 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4119b40-4749-455b-9bba-68fdf24554a0-operator-scripts\") pod \"c4119b40-4749-455b-9bba-68fdf24554a0\" (UID: \"c4119b40-4749-455b-9bba-68fdf24554a0\") " Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.406649 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f5nl\" (UniqueName: \"kubernetes.io/projected/c430bc54-e863-4d5d-bb23-0f54084f28a0-kube-api-access-2f5nl\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.406665 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7610ca35-36e4-45dd-b20d-e0ea80b3f62d" (UID: "7610ca35-36e4-45dd-b20d-e0ea80b3f62d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.407069 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4119b40-4749-455b-9bba-68fdf24554a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4119b40-4749-455b-9bba-68fdf24554a0" (UID: "c4119b40-4749-455b-9bba-68fdf24554a0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.407519 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77121317-dc3e-497c-878a-b3077fef4920-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "77121317-dc3e-497c-878a-b3077fef4920" (UID: "77121317-dc3e-497c-878a-b3077fef4920"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.410157 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4119b40-4749-455b-9bba-68fdf24554a0-kube-api-access-w8n2g" (OuterVolumeSpecName: "kube-api-access-w8n2g") pod "c4119b40-4749-455b-9bba-68fdf24554a0" (UID: "c4119b40-4749-455b-9bba-68fdf24554a0"). InnerVolumeSpecName "kube-api-access-w8n2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.411435 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-kube-api-access-gx5pc" (OuterVolumeSpecName: "kube-api-access-gx5pc") pod "7610ca35-36e4-45dd-b20d-e0ea80b3f62d" (UID: "7610ca35-36e4-45dd-b20d-e0ea80b3f62d"). InnerVolumeSpecName "kube-api-access-gx5pc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.411712 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77121317-dc3e-497c-878a-b3077fef4920-kube-api-access-qjxbv" (OuterVolumeSpecName: "kube-api-access-qjxbv") pod "77121317-dc3e-497c-878a-b3077fef4920" (UID: "77121317-dc3e-497c-878a-b3077fef4920"). InnerVolumeSpecName "kube-api-access-qjxbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.507908 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.507949 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx5pc\" (UniqueName: \"kubernetes.io/projected/7610ca35-36e4-45dd-b20d-e0ea80b3f62d-kube-api-access-gx5pc\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.507961 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/77121317-dc3e-497c-878a-b3077fef4920-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.507975 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjxbv\" (UniqueName: \"kubernetes.io/projected/77121317-dc3e-497c-878a-b3077fef4920-kube-api-access-qjxbv\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.507987 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8n2g\" (UniqueName: \"kubernetes.io/projected/c4119b40-4749-455b-9bba-68fdf24554a0-kube-api-access-w8n2g\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.507995 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4119b40-4749-455b-9bba-68fdf24554a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.690828 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-9jh8h" 
event={"ID":"c4119b40-4749-455b-9bba-68fdf24554a0","Type":"ContainerDied","Data":"a2b645405b2d2b36c9bfc4992b465c98566bf4783eabaea398c2df7e1804c5cd"} Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.690880 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2b645405b2d2b36c9bfc4992b465c98566bf4783eabaea398c2df7e1804c5cd" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.690944 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-9jh8h" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.693269 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerStarted","Data":"3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041"} Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.695287 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-7w4md" event={"ID":"7610ca35-36e4-45dd-b20d-e0ea80b3f62d","Type":"ContainerDied","Data":"dec014966dcc97314dcc7be7294181ca87a61e3087d5e72c8eb49a1bc0fb8f19"} Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.695314 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dec014966dcc97314dcc7be7294181ca87a61e3087d5e72c8eb49a1bc0fb8f19" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.695379 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-7w4md" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.696959 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-97whl" event={"ID":"77121317-dc3e-497c-878a-b3077fef4920","Type":"ContainerDied","Data":"c6b11393390b2b691ffdfb03fd9923a74b336cefa82f5e781fa65d3999f053b2"} Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.696983 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b11393390b2b691ffdfb03fd9923a74b336cefa82f5e781fa65d3999f053b2" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.696991 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-97whl" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.699108 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-cb5a-account-create-update-27lns" Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.699174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-cb5a-account-create-update-27lns" event={"ID":"c430bc54-e863-4d5d-bb23-0f54084f28a0","Type":"ContainerDied","Data":"d6d1314dfacd626de3d36fd626f6bd9ff70ad393a465b76c8cd56aeb158f96e4"} Jan 26 15:08:07 crc kubenswrapper[4823]: I0126 15:08:07.699196 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d1314dfacd626de3d36fd626f6bd9ff70ad393a465b76c8cd56aeb158f96e4" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.175178 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.183555 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.323859 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-operator-scripts\") pod \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.324214 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97a7ddc-c557-4d5c-80d7-60fd099d192d-operator-scripts\") pod \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.324282 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkdsm\" (UniqueName: \"kubernetes.io/projected/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-kube-api-access-jkdsm\") pod \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\" (UID: \"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f\") " Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.324455 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2r24\" (UniqueName: \"kubernetes.io/projected/c97a7ddc-c557-4d5c-80d7-60fd099d192d-kube-api-access-n2r24\") pod \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\" (UID: \"c97a7ddc-c557-4d5c-80d7-60fd099d192d\") " Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.325498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" (UID: "cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.329097 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c97a7ddc-c557-4d5c-80d7-60fd099d192d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c97a7ddc-c557-4d5c-80d7-60fd099d192d" (UID: "c97a7ddc-c557-4d5c-80d7-60fd099d192d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.331861 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-kube-api-access-jkdsm" (OuterVolumeSpecName: "kube-api-access-jkdsm") pod "cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" (UID: "cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f"). InnerVolumeSpecName "kube-api-access-jkdsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.333650 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97a7ddc-c557-4d5c-80d7-60fd099d192d-kube-api-access-n2r24" (OuterVolumeSpecName: "kube-api-access-n2r24") pod "c97a7ddc-c557-4d5c-80d7-60fd099d192d" (UID: "c97a7ddc-c557-4d5c-80d7-60fd099d192d"). InnerVolumeSpecName "kube-api-access-n2r24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.426911 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c97a7ddc-c557-4d5c-80d7-60fd099d192d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.426955 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkdsm\" (UniqueName: \"kubernetes.io/projected/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-kube-api-access-jkdsm\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.426965 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2r24\" (UniqueName: \"kubernetes.io/projected/c97a7ddc-c557-4d5c-80d7-60fd099d192d-kube-api-access-n2r24\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.426975 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.539173 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.728730 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" event={"ID":"c97a7ddc-c557-4d5c-80d7-60fd099d192d","Type":"ContainerDied","Data":"d4c0e6824cbd6f0f81882969bb3b0c1b01f0724b09cbba9d2997447912eea6a3"} Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.729229 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4c0e6824cbd6f0f81882969bb3b0c1b01f0724b09cbba9d2997447912eea6a3" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.728804 4823 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-cell1-da95-account-create-update-ctl6c" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.732408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" event={"ID":"cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f","Type":"ContainerDied","Data":"387b5ca2523dba371ebd9319ee91c7044511118e65d72ea271065376e950d895"} Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.732465 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="387b5ca2523dba371ebd9319ee91c7044511118e65d72ea271065376e950d895" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.732544 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2831-account-create-update-f6wxs" Jan 26 15:08:08 crc kubenswrapper[4823]: I0126 15:08:08.806818 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 26 15:08:09 crc kubenswrapper[4823]: I0126 15:08:09.747911 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerStarted","Data":"b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363"} Jan 26 15:08:09 crc kubenswrapper[4823]: I0126 15:08:09.748672 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:08:09 crc kubenswrapper[4823]: I0126 15:08:09.776272 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.903942105 podStartE2EDuration="7.776250322s" podCreationTimestamp="2026-01-26 15:08:02 +0000 UTC" firstStartedPulling="2026-01-26 15:08:03.754519919 +0000 UTC m=+1280.439983024" lastFinishedPulling="2026-01-26 15:08:08.626828136 +0000 UTC m=+1285.312291241" observedRunningTime="2026-01-26 15:08:09.769674682 +0000 UTC m=+1286.455137807" 
watchObservedRunningTime="2026-01-26 15:08:09.776250322 +0000 UTC m=+1286.461713417" Jan 26 15:08:11 crc kubenswrapper[4823]: I0126 15:08:11.725455 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:11 crc kubenswrapper[4823]: I0126 15:08:11.766220 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-central-agent" containerID="cri-o://4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc" gracePeriod=30 Jan 26 15:08:11 crc kubenswrapper[4823]: I0126 15:08:11.766285 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="proxy-httpd" containerID="cri-o://b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363" gracePeriod=30 Jan 26 15:08:11 crc kubenswrapper[4823]: I0126 15:08:11.766356 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="sg-core" containerID="cri-o://3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041" gracePeriod=30 Jan 26 15:08:11 crc kubenswrapper[4823]: I0126 15:08:11.766346 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-notification-agent" containerID="cri-o://59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77" gracePeriod=30 Jan 26 15:08:12 crc kubenswrapper[4823]: I0126 15:08:12.791345 4823 generic.go:334] "Generic (PLEG): container finished" podID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerID="b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363" exitCode=0 Jan 26 15:08:12 crc kubenswrapper[4823]: I0126 15:08:12.791698 4823 generic.go:334] "Generic (PLEG): container 
finished" podID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerID="3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041" exitCode=2 Jan 26 15:08:12 crc kubenswrapper[4823]: I0126 15:08:12.791710 4823 generic.go:334] "Generic (PLEG): container finished" podID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerID="59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77" exitCode=0 Jan 26 15:08:12 crc kubenswrapper[4823]: I0126 15:08:12.791737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerDied","Data":"b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363"} Jan 26 15:08:12 crc kubenswrapper[4823]: I0126 15:08:12.791772 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerDied","Data":"3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041"} Jan 26 15:08:12 crc kubenswrapper[4823]: I0126 15:08:12.791786 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerDied","Data":"59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77"} Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.363982 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhdz4"] Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.364948 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97a7ddc-c557-4d5c-80d7-60fd099d192d" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.364964 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97a7ddc-c557-4d5c-80d7-60fd099d192d" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.364987 4823 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.364994 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.365009 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c430bc54-e863-4d5d-bb23-0f54084f28a0" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.365016 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c430bc54-e863-4d5d-bb23-0f54084f28a0" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.365032 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4119b40-4749-455b-9bba-68fdf24554a0" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.365039 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4119b40-4749-455b-9bba-68fdf24554a0" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.365054 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77121317-dc3e-497c-878a-b3077fef4920" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.365060 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="77121317-dc3e-497c-878a-b3077fef4920" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.365078 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7610ca35-36e4-45dd-b20d-e0ea80b3f62d" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.365084 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7610ca35-36e4-45dd-b20d-e0ea80b3f62d" containerName="mariadb-database-create" Jan 26 15:08:14 crc 
kubenswrapper[4823]: I0126 15:08:14.366428 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97a7ddc-c557-4d5c-80d7-60fd099d192d" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.366454 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.366465 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4119b40-4749-455b-9bba-68fdf24554a0" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.366472 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7610ca35-36e4-45dd-b20d-e0ea80b3f62d" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.366481 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="77121317-dc3e-497c-878a-b3077fef4920" containerName="mariadb-database-create" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.366492 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c430bc54-e863-4d5d-bb23-0f54084f28a0" containerName="mariadb-account-create-update" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.367259 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.374249 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-c8vfx" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.374549 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.377232 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.387694 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhdz4"] Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.453301 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-config-data\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.453415 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.453459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d597j\" (UniqueName: \"kubernetes.io/projected/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-kube-api-access-d597j\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " 
pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.453499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-scripts\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.477712 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.554594 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-config-data\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.554651 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-run-httpd\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.554746 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-sg-core-conf-yaml\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.554788 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-combined-ca-bundle\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: 
\"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.554837 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgfd5\" (UniqueName: \"kubernetes.io/projected/1433dd20-acb3-442a-a720-81d9c7a7251f-kube-api-access-kgfd5\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.555022 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-log-httpd\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.555185 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-scripts\") pod \"1433dd20-acb3-442a-a720-81d9c7a7251f\" (UID: \"1433dd20-acb3-442a-a720-81d9c7a7251f\") " Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.555488 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-config-data\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.555583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.555625 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d597j\" (UniqueName: \"kubernetes.io/projected/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-kube-api-access-d597j\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.555683 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-scripts\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.556021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.556679 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.565278 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-config-data\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.566601 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-scripts\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.571028 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1433dd20-acb3-442a-a720-81d9c7a7251f-kube-api-access-kgfd5" (OuterVolumeSpecName: "kube-api-access-kgfd5") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "kube-api-access-kgfd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.571482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.573061 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-scripts" (OuterVolumeSpecName: "scripts") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.595196 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.607432 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d597j\" (UniqueName: \"kubernetes.io/projected/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-kube-api-access-d597j\") pod \"nova-cell0-conductor-db-sync-nhdz4\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.657570 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.658027 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.658040 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1433dd20-acb3-442a-a720-81d9c7a7251f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.658048 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.658059 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgfd5\" (UniqueName: \"kubernetes.io/projected/1433dd20-acb3-442a-a720-81d9c7a7251f-kube-api-access-kgfd5\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.677530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-combined-ca-bundle" 
(OuterVolumeSpecName: "combined-ca-bundle") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.685656 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-config-data" (OuterVolumeSpecName: "config-data") pod "1433dd20-acb3-442a-a720-81d9c7a7251f" (UID: "1433dd20-acb3-442a-a720-81d9c7a7251f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.759696 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.759747 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1433dd20-acb3-442a-a720-81d9c7a7251f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.769133 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.820057 4823 generic.go:334] "Generic (PLEG): container finished" podID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerID="4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc" exitCode=0 Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.820114 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerDied","Data":"4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc"} Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.820149 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1433dd20-acb3-442a-a720-81d9c7a7251f","Type":"ContainerDied","Data":"a5160ea8f85cb8cdb5b97de2525b7ff5ada408ef00c82a695c95271a7ecd260b"} Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.820169 4823 scope.go:117] "RemoveContainer" containerID="b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.820355 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.877659 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.892537 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.902691 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.903466 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-central-agent" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.903566 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-central-agent" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.903660 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="sg-core" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.903734 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="sg-core" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.903814 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="proxy-httpd" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.903902 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="proxy-httpd" Jan 26 15:08:14 crc kubenswrapper[4823]: E0126 15:08:14.904001 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-notification-agent" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.904079 4823 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-notification-agent" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.904427 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="proxy-httpd" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.904519 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-notification-agent" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.904621 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="ceilometer-central-agent" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.904714 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" containerName="sg-core" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.905095 4823 scope.go:117] "RemoveContainer" containerID="3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.907689 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.912134 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.913141 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.923166 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.958574 4823 scope.go:117] "RemoveContainer" containerID="59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.963807 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmhv\" (UniqueName: \"kubernetes.io/projected/b0b8612d-bb82-4943-829f-b15ef3ed8cef-kube-api-access-lfmhv\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.963902 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.963982 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-log-httpd\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.964031 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.964070 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-scripts\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.964113 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-run-httpd\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:14 crc kubenswrapper[4823]: I0126 15:08:14.964217 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.014505 4823 scope.go:117] "RemoveContainer" containerID="4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.066966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.067070 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-lfmhv\" (UniqueName: \"kubernetes.io/projected/b0b8612d-bb82-4943-829f-b15ef3ed8cef-kube-api-access-lfmhv\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.067113 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.067170 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-log-httpd\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.067206 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.067250 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-scripts\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.067274 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-run-httpd\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc 
kubenswrapper[4823]: I0126 15:08:15.068667 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-run-httpd\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.068936 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-log-httpd\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.075624 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.076602 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.087261 4823 scope.go:117] "RemoveContainer" containerID="b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.089101 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-scripts\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: E0126 15:08:15.089333 4823 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363\": container with ID starting with b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363 not found: ID does not exist" containerID="b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.089402 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363"} err="failed to get container status \"b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363\": rpc error: code = NotFound desc = could not find container \"b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363\": container with ID starting with b71360a4d7f575efb0c112f7b57008b62174829d5b3799b446e8f47872b95363 not found: ID does not exist" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.089435 4823 scope.go:117] "RemoveContainer" containerID="3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041" Jan 26 15:08:15 crc kubenswrapper[4823]: E0126 15:08:15.090445 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041\": container with ID starting with 3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041 not found: ID does not exist" containerID="3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.090469 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041"} err="failed to get container status \"3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041\": rpc error: code = NotFound desc = could not find container 
\"3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041\": container with ID starting with 3f816378078b2b8a0f922f81c691f129c6cf93dbb4172ed56f2348e92eb07041 not found: ID does not exist" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.090485 4823 scope.go:117] "RemoveContainer" containerID="59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77" Jan 26 15:08:15 crc kubenswrapper[4823]: E0126 15:08:15.094541 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77\": container with ID starting with 59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77 not found: ID does not exist" containerID="59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.094576 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77"} err="failed to get container status \"59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77\": rpc error: code = NotFound desc = could not find container \"59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77\": container with ID starting with 59687fbc8fde1c167d3a425db6335728835f857a9629fb91710fff6bb80a5c77 not found: ID does not exist" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.094594 4823 scope.go:117] "RemoveContainer" containerID="4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc" Jan 26 15:08:15 crc kubenswrapper[4823]: E0126 15:08:15.095433 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc\": container with ID starting with 4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc not found: ID does not exist" 
containerID="4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.095456 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc"} err="failed to get container status \"4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc\": rpc error: code = NotFound desc = could not find container \"4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc\": container with ID starting with 4cb27e006ec7b3b4868dc3ac76ab8d4f7985f0e458aabaa93f6473e63d6f4ffc not found: ID does not exist" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.101789 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.106896 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmhv\" (UniqueName: \"kubernetes.io/projected/b0b8612d-bb82-4943-829f-b15ef3ed8cef-kube-api-access-lfmhv\") pod \"ceilometer-0\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.236465 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.304822 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhdz4"] Jan 26 15:08:15 crc kubenswrapper[4823]: W0126 15:08:15.343405 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcfb508b_ce02_4bc4_a362_b309ece5fd3c.slice/crio-f478d6a652c5626dd9f65c7e1d045fa78f0d83ab410911f26aac5b980964edfd WatchSource:0}: Error finding container f478d6a652c5626dd9f65c7e1d045fa78f0d83ab410911f26aac5b980964edfd: Status 404 returned error can't find the container with id f478d6a652c5626dd9f65c7e1d045fa78f0d83ab410911f26aac5b980964edfd Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.571925 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1433dd20-acb3-442a-a720-81d9c7a7251f" path="/var/lib/kubelet/pods/1433dd20-acb3-442a-a720-81d9c7a7251f/volumes" Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.781052 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.832696 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" event={"ID":"dcfb508b-ce02-4bc4-a362-b309ece5fd3c","Type":"ContainerStarted","Data":"f478d6a652c5626dd9f65c7e1d045fa78f0d83ab410911f26aac5b980964edfd"} Jan 26 15:08:15 crc kubenswrapper[4823]: I0126 15:08:15.836000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerStarted","Data":"73d93efd9885cac1d70331a4341a9b11fc07f0863cf0cff23132776f37492c98"} Jan 26 15:08:16 crc kubenswrapper[4823]: I0126 15:08:16.849498 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerStarted","Data":"454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8"} Jan 26 15:08:17 crc kubenswrapper[4823]: I0126 15:08:17.861439 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerStarted","Data":"e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6"} Jan 26 15:08:18 crc kubenswrapper[4823]: I0126 15:08:18.874300 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerStarted","Data":"a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6"} Jan 26 15:08:23 crc kubenswrapper[4823]: I0126 15:08:23.945470 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerStarted","Data":"fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83"} Jan 26 15:08:23 crc kubenswrapper[4823]: I0126 15:08:23.946292 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:08:23 crc kubenswrapper[4823]: I0126 15:08:23.949015 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" event={"ID":"dcfb508b-ce02-4bc4-a362-b309ece5fd3c","Type":"ContainerStarted","Data":"0458cfd782b301eded2db5ebb278824d6f5179cc8b7b0cbadbb35ac7a99f3aa6"} Jan 26 15:08:23 crc kubenswrapper[4823]: I0126 15:08:23.976049 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.884869962 podStartE2EDuration="9.975969102s" podCreationTimestamp="2026-01-26 15:08:14 +0000 UTC" firstStartedPulling="2026-01-26 15:08:15.786298435 +0000 UTC m=+1292.471761540" lastFinishedPulling="2026-01-26 15:08:22.877397575 +0000 UTC m=+1299.562860680" 
observedRunningTime="2026-01-26 15:08:23.974230575 +0000 UTC m=+1300.659693690" watchObservedRunningTime="2026-01-26 15:08:23.975969102 +0000 UTC m=+1300.661432207" Jan 26 15:08:24 crc kubenswrapper[4823]: I0126 15:08:24.001291 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" podStartSLOduration=2.465672437 podStartE2EDuration="10.001258492s" podCreationTimestamp="2026-01-26 15:08:14 +0000 UTC" firstStartedPulling="2026-01-26 15:08:15.346036136 +0000 UTC m=+1292.031499251" lastFinishedPulling="2026-01-26 15:08:22.881622201 +0000 UTC m=+1299.567085306" observedRunningTime="2026-01-26 15:08:23.995027753 +0000 UTC m=+1300.680490858" watchObservedRunningTime="2026-01-26 15:08:24.001258492 +0000 UTC m=+1300.686721597" Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.077181 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.079033 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-central-agent" containerID="cri-o://454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8" gracePeriod=30 Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.079135 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="proxy-httpd" containerID="cri-o://fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83" gracePeriod=30 Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.079110 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="sg-core" containerID="cri-o://a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6" gracePeriod=30 Jan 26 15:08:26 
crc kubenswrapper[4823]: I0126 15:08:26.079252 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-notification-agent" containerID="cri-o://e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6" gracePeriod=30 Jan 26 15:08:26 crc kubenswrapper[4823]: E0126 15:08:26.941223 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0b8612d_bb82_4943_829f_b15ef3ed8cef.slice/crio-conmon-454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.978287 4823 generic.go:334] "Generic (PLEG): container finished" podID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerID="fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83" exitCode=0 Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.978321 4823 generic.go:334] "Generic (PLEG): container finished" podID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerID="a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6" exitCode=2 Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.978332 4823 generic.go:334] "Generic (PLEG): container finished" podID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerID="454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8" exitCode=0 Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.978374 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerDied","Data":"fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83"} Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.978504 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerDied","Data":"a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6"} Jan 26 15:08:26 crc kubenswrapper[4823]: I0126 15:08:26.978519 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerDied","Data":"454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8"} Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.602708 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.742869 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-run-httpd\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.742923 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743028 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-scripts\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743104 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-combined-ca-bundle\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: 
\"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743161 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-sg-core-conf-yaml\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743242 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmhv\" (UniqueName: \"kubernetes.io/projected/b0b8612d-bb82-4943-829f-b15ef3ed8cef-kube-api-access-lfmhv\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743305 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-log-httpd\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743400 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.743856 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.744071 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.744100 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b0b8612d-bb82-4943-829f-b15ef3ed8cef-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.754419 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-scripts" (OuterVolumeSpecName: "scripts") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.756784 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b8612d-bb82-4943-829f-b15ef3ed8cef-kube-api-access-lfmhv" (OuterVolumeSpecName: "kube-api-access-lfmhv") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "kube-api-access-lfmhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.775561 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.819862 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.845666 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data" (OuterVolumeSpecName: "config-data") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.846476 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data\") pod \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\" (UID: \"b0b8612d-bb82-4943-829f-b15ef3ed8cef\") " Jan 26 15:08:27 crc kubenswrapper[4823]: W0126 15:08:27.846786 4823 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/b0b8612d-bb82-4943-829f-b15ef3ed8cef/volumes/kubernetes.io~secret/config-data Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.846807 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data" (OuterVolumeSpecName: "config-data") pod "b0b8612d-bb82-4943-829f-b15ef3ed8cef" (UID: "b0b8612d-bb82-4943-829f-b15ef3ed8cef"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.847159 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.847188 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.847199 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.847212 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b0b8612d-bb82-4943-829f-b15ef3ed8cef-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:27 crc kubenswrapper[4823]: I0126 15:08:27.847223 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfmhv\" (UniqueName: \"kubernetes.io/projected/b0b8612d-bb82-4943-829f-b15ef3ed8cef-kube-api-access-lfmhv\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.001272 4823 generic.go:334] "Generic (PLEG): container finished" podID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerID="e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6" exitCode=0 Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.001353 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerDied","Data":"e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6"} Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 
15:08:28.001413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b0b8612d-bb82-4943-829f-b15ef3ed8cef","Type":"ContainerDied","Data":"73d93efd9885cac1d70331a4341a9b11fc07f0863cf0cff23132776f37492c98"} Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.001440 4823 scope.go:117] "RemoveContainer" containerID="fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.001707 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.025750 4823 scope.go:117] "RemoveContainer" containerID="a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.053197 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.068109 4823 scope.go:117] "RemoveContainer" containerID="e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.079026 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.100253 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.100668 4823 scope.go:117] "RemoveContainer" containerID="454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.100858 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-notification-agent" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.100885 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" 
containerName="ceilometer-notification-agent" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.100917 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="sg-core" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.100927 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="sg-core" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.100956 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-central-agent" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.100965 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-central-agent" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.100991 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="proxy-httpd" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.101001 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="proxy-httpd" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.101232 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="sg-core" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.101265 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="proxy-httpd" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.101279 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-central-agent" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.101291 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" containerName="ceilometer-notification-agent" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.103633 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.107571 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.107687 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.120674 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.137486 4823 scope.go:117] "RemoveContainer" containerID="fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.137971 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83\": container with ID starting with fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83 not found: ID does not exist" containerID="fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.138039 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83"} err="failed to get container status \"fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83\": rpc error: code = NotFound desc = could not find container \"fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83\": container with ID starting with fbcb2ff600fd6240fb773ca3d391114f50d65d01391266c13cb4ccb8648aec83 not found: ID does not exist" Jan 26 15:08:28 crc 
kubenswrapper[4823]: I0126 15:08:28.138076 4823 scope.go:117] "RemoveContainer" containerID="a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.138580 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6\": container with ID starting with a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6 not found: ID does not exist" containerID="a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.138632 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6"} err="failed to get container status \"a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6\": rpc error: code = NotFound desc = could not find container \"a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6\": container with ID starting with a55e3c67305ef669ef96abbd806f98ead593404e7647010f6d17900dfd6df0f6 not found: ID does not exist" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.138671 4823 scope.go:117] "RemoveContainer" containerID="e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.138967 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6\": container with ID starting with e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6 not found: ID does not exist" containerID="e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.138995 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6"} err="failed to get container status \"e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6\": rpc error: code = NotFound desc = could not find container \"e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6\": container with ID starting with e0dd8bc88d0bb50bf13e327b12e5ce1b646426f4d480fa7e15eb36db9efd0ed6 not found: ID does not exist" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.139013 4823 scope.go:117] "RemoveContainer" containerID="454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8" Jan 26 15:08:28 crc kubenswrapper[4823]: E0126 15:08:28.139416 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8\": container with ID starting with 454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8 not found: ID does not exist" containerID="454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.139444 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8"} err="failed to get container status \"454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8\": rpc error: code = NotFound desc = could not find container \"454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8\": container with ID starting with 454565e254bc174a4d8fc36913bbe9a690a34995002d72d1a705ec4dc233e6a8 not found: ID does not exist" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.259126 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.259234 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-config-data\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.259330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f54wn\" (UniqueName: \"kubernetes.io/projected/ac87bc60-d424-40d1-913b-14d363dc5b1b-kube-api-access-f54wn\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.259372 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.259417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-run-httpd\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.259450 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-log-httpd\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc 
kubenswrapper[4823]: I0126 15:08:28.259499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-scripts\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.361636 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-run-httpd\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362018 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-log-httpd\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362078 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-scripts\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362161 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362214 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-config-data\") pod 
\"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362273 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f54wn\" (UniqueName: \"kubernetes.io/projected/ac87bc60-d424-40d1-913b-14d363dc5b1b-kube-api-access-f54wn\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362333 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-log-httpd\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.362695 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-run-httpd\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.368748 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-scripts\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.370517 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.371838 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-config-data\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.375470 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.388275 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f54wn\" (UniqueName: \"kubernetes.io/projected/ac87bc60-d424-40d1-913b-14d363dc5b1b-kube-api-access-f54wn\") pod \"ceilometer-0\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.430108 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:08:28 crc kubenswrapper[4823]: I0126 15:08:28.912912 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:08:29 crc kubenswrapper[4823]: I0126 15:08:29.015731 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerStarted","Data":"351b62b812a717b97a6f6d5b2ab0ae87780eaa267f9b41a2117c966f8dc84846"} Jan 26 15:08:29 crc kubenswrapper[4823]: I0126 15:08:29.576767 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b8612d-bb82-4943-829f-b15ef3ed8cef" path="/var/lib/kubelet/pods/b0b8612d-bb82-4943-829f-b15ef3ed8cef/volumes" Jan 26 15:08:30 crc kubenswrapper[4823]: I0126 15:08:30.030762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerStarted","Data":"927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f"} Jan 26 15:08:31 crc kubenswrapper[4823]: I0126 15:08:31.049980 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerStarted","Data":"7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa"} Jan 26 15:08:31 crc kubenswrapper[4823]: I0126 15:08:31.050832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerStarted","Data":"96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d"} Jan 26 15:08:33 crc kubenswrapper[4823]: I0126 15:08:33.094643 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerStarted","Data":"e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b"} Jan 26 15:08:33 crc kubenswrapper[4823]: I0126 
15:08:33.095865 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:08:33 crc kubenswrapper[4823]: I0126 15:08:33.129395 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.7599410089999998 podStartE2EDuration="5.129348751s" podCreationTimestamp="2026-01-26 15:08:28 +0000 UTC" firstStartedPulling="2026-01-26 15:08:28.924012158 +0000 UTC m=+1305.609475263" lastFinishedPulling="2026-01-26 15:08:32.2934199 +0000 UTC m=+1308.978883005" observedRunningTime="2026-01-26 15:08:33.122971307 +0000 UTC m=+1309.808434432" watchObservedRunningTime="2026-01-26 15:08:33.129348751 +0000 UTC m=+1309.814811856" Jan 26 15:08:36 crc kubenswrapper[4823]: I0126 15:08:36.146672 4823 generic.go:334] "Generic (PLEG): container finished" podID="dcfb508b-ce02-4bc4-a362-b309ece5fd3c" containerID="0458cfd782b301eded2db5ebb278824d6f5179cc8b7b0cbadbb35ac7a99f3aa6" exitCode=0 Jan 26 15:08:36 crc kubenswrapper[4823]: I0126 15:08:36.146733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" event={"ID":"dcfb508b-ce02-4bc4-a362-b309ece5fd3c","Type":"ContainerDied","Data":"0458cfd782b301eded2db5ebb278824d6f5179cc8b7b0cbadbb35ac7a99f3aa6"} Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.530990 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.672972 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-combined-ca-bundle\") pod \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.673251 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-config-data\") pod \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.673320 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d597j\" (UniqueName: \"kubernetes.io/projected/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-kube-api-access-d597j\") pod \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.673497 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-scripts\") pod \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\" (UID: \"dcfb508b-ce02-4bc4-a362-b309ece5fd3c\") " Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.680145 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-scripts" (OuterVolumeSpecName: "scripts") pod "dcfb508b-ce02-4bc4-a362-b309ece5fd3c" (UID: "dcfb508b-ce02-4bc4-a362-b309ece5fd3c"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.696403 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-kube-api-access-d597j" (OuterVolumeSpecName: "kube-api-access-d597j") pod "dcfb508b-ce02-4bc4-a362-b309ece5fd3c" (UID: "dcfb508b-ce02-4bc4-a362-b309ece5fd3c"). InnerVolumeSpecName "kube-api-access-d597j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.705286 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcfb508b-ce02-4bc4-a362-b309ece5fd3c" (UID: "dcfb508b-ce02-4bc4-a362-b309ece5fd3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.736620 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-config-data" (OuterVolumeSpecName: "config-data") pod "dcfb508b-ce02-4bc4-a362-b309ece5fd3c" (UID: "dcfb508b-ce02-4bc4-a362-b309ece5fd3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.776254 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.776313 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d597j\" (UniqueName: \"kubernetes.io/projected/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-kube-api-access-d597j\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.776337 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:37 crc kubenswrapper[4823]: I0126 15:08:37.776358 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcfb508b-ce02-4bc4-a362-b309ece5fd3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.171425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" event={"ID":"dcfb508b-ce02-4bc4-a362-b309ece5fd3c","Type":"ContainerDied","Data":"f478d6a652c5626dd9f65c7e1d045fa78f0d83ab410911f26aac5b980964edfd"} Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.171514 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f478d6a652c5626dd9f65c7e1d045fa78f0d83ab410911f26aac5b980964edfd" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.171577 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhdz4" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.300540 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 15:08:38 crc kubenswrapper[4823]: E0126 15:08:38.301180 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcfb508b-ce02-4bc4-a362-b309ece5fd3c" containerName="nova-cell0-conductor-db-sync" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.301207 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcfb508b-ce02-4bc4-a362-b309ece5fd3c" containerName="nova-cell0-conductor-db-sync" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.301553 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcfb508b-ce02-4bc4-a362-b309ece5fd3c" containerName="nova-cell0-conductor-db-sync" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.302644 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.307018 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-c8vfx" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.307991 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.326180 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.388060 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5826af2c-a67e-4848-a374-794b8c905989-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 
15:08:38.388172 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5826af2c-a67e-4848-a374-794b8c905989-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.388644 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rq8d\" (UniqueName: \"kubernetes.io/projected/5826af2c-a67e-4848-a374-794b8c905989-kube-api-access-2rq8d\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.491280 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5826af2c-a67e-4848-a374-794b8c905989-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.491427 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5826af2c-a67e-4848-a374-794b8c905989-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.491565 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rq8d\" (UniqueName: \"kubernetes.io/projected/5826af2c-a67e-4848-a374-794b8c905989-kube-api-access-2rq8d\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.496272 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5826af2c-a67e-4848-a374-794b8c905989-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.496708 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5826af2c-a67e-4848-a374-794b8c905989-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.519468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rq8d\" (UniqueName: \"kubernetes.io/projected/5826af2c-a67e-4848-a374-794b8c905989-kube-api-access-2rq8d\") pod \"nova-cell0-conductor-0\" (UID: \"5826af2c-a67e-4848-a374-794b8c905989\") " pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:38 crc kubenswrapper[4823]: I0126 15:08:38.640050 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:39 crc kubenswrapper[4823]: I0126 15:08:39.125156 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 26 15:08:39 crc kubenswrapper[4823]: I0126 15:08:39.192993 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5826af2c-a67e-4848-a374-794b8c905989","Type":"ContainerStarted","Data":"170fc6238497c8b7d519c6ba8b931e33513583fc086bac03e1f222ce705059da"} Jan 26 15:08:40 crc kubenswrapper[4823]: I0126 15:08:40.213842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5826af2c-a67e-4848-a374-794b8c905989","Type":"ContainerStarted","Data":"6d676a10eae299211a220b6b449377f19b30038d1eea683806001d48e2c40e11"} Jan 26 15:08:40 crc kubenswrapper[4823]: I0126 15:08:40.214754 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:40 crc kubenswrapper[4823]: I0126 15:08:40.252486 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.252458807 podStartE2EDuration="2.252458807s" podCreationTimestamp="2026-01-26 15:08:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:40.244841289 +0000 UTC m=+1316.930304414" watchObservedRunningTime="2026-01-26 15:08:40.252458807 +0000 UTC m=+1316.937921922" Jan 26 15:08:48 crc kubenswrapper[4823]: I0126 15:08:48.673123 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.248543 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-cfvtn"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.249891 4823 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.258929 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.260577 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cfvtn"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.262596 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.329896 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-config-data\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.330004 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-scripts\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.330048 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.330080 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-b8f4j\" (UniqueName: \"kubernetes.io/projected/b6d4313b-cd31-4952-8f17-0a5021c4adc3-kube-api-access-b8f4j\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.432082 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-scripts\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.432168 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.432221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8f4j\" (UniqueName: \"kubernetes.io/projected/b6d4313b-cd31-4952-8f17-0a5021c4adc3-kube-api-access-b8f4j\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.432276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-config-data\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.436233 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 15:08:49 crc 
kubenswrapper[4823]: I0126 15:08:49.438072 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.440701 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.443250 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.446088 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-config-data\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.446134 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-scripts\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.460482 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.468954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8f4j\" (UniqueName: \"kubernetes.io/projected/b6d4313b-cd31-4952-8f17-0a5021c4adc3-kube-api-access-b8f4j\") pod \"nova-cell0-cell-mapping-cfvtn\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 
crc kubenswrapper[4823]: I0126 15:08:49.533811 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mrhd\" (UniqueName: \"kubernetes.io/projected/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-kube-api-access-4mrhd\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.533869 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.533930 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-config-data\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.533986 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-logs\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.559651 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.560797 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.568584 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.582700 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.598854 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.724035 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.724181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-config-data\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.724238 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-logs\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.724297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mrhd\" (UniqueName: \"kubernetes.io/projected/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-kube-api-access-4mrhd\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 
15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.732219 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.733935 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.735501 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-logs\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.737874 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.757902 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-config-data\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.759661 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.784269 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.788345 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mrhd\" (UniqueName: \"kubernetes.io/projected/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-kube-api-access-4mrhd\") pod \"nova-api-0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " 
pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.826108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.826525 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.826657 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-config-data\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.826779 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e43e93-d7ca-4837-94c3-d95a3de412c8-logs\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.826857 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrfp\" (UniqueName: \"kubernetes.io/projected/84e43e93-d7ca-4837-94c3-d95a3de412c8-kube-api-access-8mrfp\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.826940 
4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sn6j\" (UniqueName: \"kubernetes.io/projected/e7018514-b93c-40e0-a7af-63bb1055da22-kube-api-access-4sn6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.827046 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.846531 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.848463 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.849817 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.850715 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.898529 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.909541 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-bb6gn"] Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.911645 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930543 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930618 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930661 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-config-data\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930699 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e43e93-d7ca-4837-94c3-d95a3de412c8-logs\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mrfp\" (UniqueName: \"kubernetes.io/projected/84e43e93-d7ca-4837-94c3-d95a3de412c8-kube-api-access-8mrfp\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930746 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4sn6j\" (UniqueName: \"kubernetes.io/projected/e7018514-b93c-40e0-a7af-63bb1055da22-kube-api-access-4sn6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.930772 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.932813 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e43e93-d7ca-4837-94c3-d95a3de412c8-logs\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.938216 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.943144 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.948545 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-config-data\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.954920 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mrfp\" (UniqueName: \"kubernetes.io/projected/84e43e93-d7ca-4837-94c3-d95a3de412c8-kube-api-access-8mrfp\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.955657 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-config-data\") pod \"nova-metadata-0\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.962955 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sn6j\" (UniqueName: \"kubernetes.io/projected/e7018514-b93c-40e0-a7af-63bb1055da22-kube-api-access-4sn6j\") pod \"nova-cell1-novncproxy-0\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:49 crc kubenswrapper[4823]: I0126 15:08:49.976828 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-bb6gn"] Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034020 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034204 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034711 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034770 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gdkj\" (UniqueName: \"kubernetes.io/projected/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-kube-api-access-6gdkj\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034890 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58wn\" (UniqueName: \"kubernetes.io/projected/eae4b870-d305-4b7f-8b9c-a30366d0123c-kube-api-access-j58wn\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.034940 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-config-data\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.035003 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-config\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.137868 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58wn\" (UniqueName: \"kubernetes.io/projected/eae4b870-d305-4b7f-8b9c-a30366d0123c-kube-api-access-j58wn\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-config-data\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138475 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-config\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138542 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-dns-svc\") pod 
\"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138584 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138677 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138726 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.138765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gdkj\" (UniqueName: \"kubernetes.io/projected/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-kube-api-access-6gdkj\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.143273 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " 
pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.143926 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-config\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.144483 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.145138 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.146922 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.149163 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-config-data\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.176658 4823 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-6gdkj\" (UniqueName: \"kubernetes.io/projected/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-kube-api-access-6gdkj\") pod \"nova-scheduler-0\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.178212 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58wn\" (UniqueName: \"kubernetes.io/projected/eae4b870-d305-4b7f-8b9c-a30366d0123c-kube-api-access-j58wn\") pod \"dnsmasq-dns-8b8cf6657-bb6gn\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.183941 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.184421 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.202187 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.242158 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.445197 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-cfvtn"] Jan 26 15:08:50 crc kubenswrapper[4823]: W0126 15:08:50.491297 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6d4313b_cd31_4952_8f17_0a5021c4adc3.slice/crio-d3297196468202ba9e496ec3e1974879459af71ca6d7aab1b2e104cee772b8a7 WatchSource:0}: Error finding container d3297196468202ba9e496ec3e1974879459af71ca6d7aab1b2e104cee772b8a7: Status 404 returned error can't find the container with id d3297196468202ba9e496ec3e1974879459af71ca6d7aab1b2e104cee772b8a7 Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.561469 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.575801 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.661984 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ddcsn"] Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.665185 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.668409 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.668814 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.705690 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ddcsn"] Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.764270 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.764476 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-scripts\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.764512 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gczs\" (UniqueName: \"kubernetes.io/projected/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-kube-api-access-4gczs\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.764569 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-config-data\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.864576 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.865897 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-scripts\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.865953 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gczs\" (UniqueName: \"kubernetes.io/projected/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-kube-api-access-4gczs\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.866000 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-config-data\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.866058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: 
\"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.873232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-scripts\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.875001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: W0126 15:08:50.875649 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeae4b870_d305_4b7f_8b9c_a30366d0123c.slice/crio-6009b840b9d698daa02d442e16b89588a328d5629b4bdd83442e2ed2ec12c927 WatchSource:0}: Error finding container 6009b840b9d698daa02d442e16b89588a328d5629b4bdd83442e2ed2ec12c927: Status 404 returned error can't find the container with id 6009b840b9d698daa02d442e16b89588a328d5629b4bdd83442e2ed2ec12c927 Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.875655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-config-data\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:50 crc kubenswrapper[4823]: I0126 15:08:50.880800 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-bb6gn"] Jan 26 15:08:50 crc kubenswrapper[4823]: 
I0126 15:08:50.883140 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gczs\" (UniqueName: \"kubernetes.io/projected/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-kube-api-access-4gczs\") pod \"nova-cell1-conductor-db-sync-ddcsn\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.032952 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.056246 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.079086 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:51 crc kubenswrapper[4823]: W0126 15:08:51.086446 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84e43e93_d7ca_4837_94c3_d95a3de412c8.slice/crio-f0742eba593858135266a56042ee2e5f53ac4b94d0b3cfdd7b16c33ae2ac9a02 WatchSource:0}: Error finding container f0742eba593858135266a56042ee2e5f53ac4b94d0b3cfdd7b16c33ae2ac9a02: Status 404 returned error can't find the container with id f0742eba593858135266a56042ee2e5f53ac4b94d0b3cfdd7b16c33ae2ac9a02 Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.387877 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cfvtn" event={"ID":"b6d4313b-cd31-4952-8f17-0a5021c4adc3","Type":"ContainerStarted","Data":"aa2c3fb280da60360c7695bb61cf0cf35ae2276aef775d0a6c0832363e1bdb40"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.388300 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cfvtn" 
event={"ID":"b6d4313b-cd31-4952-8f17-0a5021c4adc3","Type":"ContainerStarted","Data":"d3297196468202ba9e496ec3e1974879459af71ca6d7aab1b2e104cee772b8a7"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.388968 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a","Type":"ContainerStarted","Data":"316a30dccf15e289a7ca6c70f0cab16493ca3a10bcc4297d8466846292d0dd14"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.390791 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e7018514-b93c-40e0-a7af-63bb1055da22","Type":"ContainerStarted","Data":"79f5f3ced3bcf3d8e6c10897f7f7604c22bc7c2cff844e2b3b5c366df2da7d96"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.392146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e43e93-d7ca-4837-94c3-d95a3de412c8","Type":"ContainerStarted","Data":"f0742eba593858135266a56042ee2e5f53ac4b94d0b3cfdd7b16c33ae2ac9a02"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.397070 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0","Type":"ContainerStarted","Data":"ae267cc955c605faf68505b96d23f966305d5689644c4908e7c645c056154444"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.399628 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" event={"ID":"eae4b870-d305-4b7f-8b9c-a30366d0123c","Type":"ContainerStarted","Data":"6009b840b9d698daa02d442e16b89588a328d5629b4bdd83442e2ed2ec12c927"} Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.420702 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-cfvtn" podStartSLOduration=2.420681387 podStartE2EDuration="2.420681387s" podCreationTimestamp="2026-01-26 15:08:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:51.410004485 +0000 UTC m=+1328.095467610" watchObservedRunningTime="2026-01-26 15:08:51.420681387 +0000 UTC m=+1328.106144492" Jan 26 15:08:51 crc kubenswrapper[4823]: I0126 15:08:51.634658 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ddcsn"] Jan 26 15:08:52 crc kubenswrapper[4823]: I0126 15:08:52.417581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" event={"ID":"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2","Type":"ContainerStarted","Data":"726164a2aa520369c61dc9c8f0a5763f054a1c63c9cb2ba7134adb34bd3f3356"} Jan 26 15:08:52 crc kubenswrapper[4823]: I0126 15:08:52.418016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" event={"ID":"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2","Type":"ContainerStarted","Data":"56267f0c1b908910ab38b905a661a179c90ef06a7c6192112b3ad754fb7df8c7"} Jan 26 15:08:52 crc kubenswrapper[4823]: I0126 15:08:52.434740 4823 generic.go:334] "Generic (PLEG): container finished" podID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerID="79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9" exitCode=0 Jan 26 15:08:52 crc kubenswrapper[4823]: I0126 15:08:52.435520 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" event={"ID":"eae4b870-d305-4b7f-8b9c-a30366d0123c","Type":"ContainerDied","Data":"79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9"} Jan 26 15:08:52 crc kubenswrapper[4823]: I0126 15:08:52.439128 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" podStartSLOduration=2.439098944 podStartE2EDuration="2.439098944s" podCreationTimestamp="2026-01-26 15:08:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:52.435866345 +0000 UTC m=+1329.121329450" watchObservedRunningTime="2026-01-26 15:08:52.439098944 +0000 UTC m=+1329.124562049" Jan 26 15:08:53 crc kubenswrapper[4823]: I0126 15:08:53.171077 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:53 crc kubenswrapper[4823]: I0126 15:08:53.226318 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:08:53 crc kubenswrapper[4823]: I0126 15:08:53.467080 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" event={"ID":"eae4b870-d305-4b7f-8b9c-a30366d0123c","Type":"ContainerStarted","Data":"882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15"} Jan 26 15:08:53 crc kubenswrapper[4823]: I0126 15:08:53.491000 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" podStartSLOduration=4.490979544 podStartE2EDuration="4.490979544s" podCreationTimestamp="2026-01-26 15:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:53.49046702 +0000 UTC m=+1330.175930125" watchObservedRunningTime="2026-01-26 15:08:53.490979544 +0000 UTC m=+1330.176442649" Jan 26 15:08:54 crc kubenswrapper[4823]: I0126 15:08:54.490145 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.510114 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e7018514-b93c-40e0-a7af-63bb1055da22","Type":"ContainerStarted","Data":"b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca"} Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.510194 4823 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e7018514-b93c-40e0-a7af-63bb1055da22" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca" gracePeriod=30 Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.515341 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e43e93-d7ca-4837-94c3-d95a3de412c8","Type":"ContainerStarted","Data":"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b"} Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.515425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e43e93-d7ca-4837-94c3-d95a3de412c8","Type":"ContainerStarted","Data":"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99"} Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.515454 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-log" containerID="cri-o://66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99" gracePeriod=30 Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.515572 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-metadata" containerID="cri-o://3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b" gracePeriod=30 Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.519128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0","Type":"ContainerStarted","Data":"1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7"} Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.519188 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0","Type":"ContainerStarted","Data":"d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094"} Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.526744 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a","Type":"ContainerStarted","Data":"cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742"} Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.543964 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.311917078 podStartE2EDuration="7.54393893s" podCreationTimestamp="2026-01-26 15:08:49 +0000 UTC" firstStartedPulling="2026-01-26 15:08:50.870080963 +0000 UTC m=+1327.555544068" lastFinishedPulling="2026-01-26 15:08:55.102102815 +0000 UTC m=+1331.787565920" observedRunningTime="2026-01-26 15:08:56.532650052 +0000 UTC m=+1333.218113177" watchObservedRunningTime="2026-01-26 15:08:56.54393893 +0000 UTC m=+1333.229402035" Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.578467 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.05284156 podStartE2EDuration="7.578440273s" podCreationTimestamp="2026-01-26 15:08:49 +0000 UTC" firstStartedPulling="2026-01-26 15:08:50.575545426 +0000 UTC m=+1327.261008531" lastFinishedPulling="2026-01-26 15:08:55.101144139 +0000 UTC m=+1331.786607244" observedRunningTime="2026-01-26 15:08:56.567556256 +0000 UTC m=+1333.253019371" watchObservedRunningTime="2026-01-26 15:08:56.578440273 +0000 UTC m=+1333.263903398" Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.614046 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.591894528 podStartE2EDuration="7.614010635s" 
podCreationTimestamp="2026-01-26 15:08:49 +0000 UTC" firstStartedPulling="2026-01-26 15:08:51.077747027 +0000 UTC m=+1327.763210132" lastFinishedPulling="2026-01-26 15:08:55.099863124 +0000 UTC m=+1331.785326239" observedRunningTime="2026-01-26 15:08:56.609745009 +0000 UTC m=+1333.295208114" watchObservedRunningTime="2026-01-26 15:08:56.614010635 +0000 UTC m=+1333.299473750" Jan 26 15:08:56 crc kubenswrapper[4823]: I0126 15:08:56.658398 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.64063004 podStartE2EDuration="7.658354147s" podCreationTimestamp="2026-01-26 15:08:49 +0000 UTC" firstStartedPulling="2026-01-26 15:08:51.094144775 +0000 UTC m=+1327.779607880" lastFinishedPulling="2026-01-26 15:08:55.111868882 +0000 UTC m=+1331.797331987" observedRunningTime="2026-01-26 15:08:56.649045613 +0000 UTC m=+1333.334508718" watchObservedRunningTime="2026-01-26 15:08:56.658354147 +0000 UTC m=+1333.343817242" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.142765 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.276689 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mrfp\" (UniqueName: \"kubernetes.io/projected/84e43e93-d7ca-4837-94c3-d95a3de412c8-kube-api-access-8mrfp\") pod \"84e43e93-d7ca-4837-94c3-d95a3de412c8\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.276779 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e43e93-d7ca-4837-94c3-d95a3de412c8-logs\") pod \"84e43e93-d7ca-4837-94c3-d95a3de412c8\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.276963 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-combined-ca-bundle\") pod \"84e43e93-d7ca-4837-94c3-d95a3de412c8\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.277016 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-config-data\") pod \"84e43e93-d7ca-4837-94c3-d95a3de412c8\" (UID: \"84e43e93-d7ca-4837-94c3-d95a3de412c8\") " Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.278815 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84e43e93-d7ca-4837-94c3-d95a3de412c8-logs" (OuterVolumeSpecName: "logs") pod "84e43e93-d7ca-4837-94c3-d95a3de412c8" (UID: "84e43e93-d7ca-4837-94c3-d95a3de412c8"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.285676 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e43e93-d7ca-4837-94c3-d95a3de412c8-kube-api-access-8mrfp" (OuterVolumeSpecName: "kube-api-access-8mrfp") pod "84e43e93-d7ca-4837-94c3-d95a3de412c8" (UID: "84e43e93-d7ca-4837-94c3-d95a3de412c8"). InnerVolumeSpecName "kube-api-access-8mrfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.317603 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84e43e93-d7ca-4837-94c3-d95a3de412c8" (UID: "84e43e93-d7ca-4837-94c3-d95a3de412c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.368600 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-config-data" (OuterVolumeSpecName: "config-data") pod "84e43e93-d7ca-4837-94c3-d95a3de412c8" (UID: "84e43e93-d7ca-4837-94c3-d95a3de412c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.380232 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.380455 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84e43e93-d7ca-4837-94c3-d95a3de412c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.380569 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mrfp\" (UniqueName: \"kubernetes.io/projected/84e43e93-d7ca-4837-94c3-d95a3de412c8-kube-api-access-8mrfp\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.380663 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84e43e93-d7ca-4837-94c3-d95a3de412c8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.539140 4823 generic.go:334] "Generic (PLEG): container finished" podID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerID="3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b" exitCode=0 Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.539176 4823 generic.go:334] "Generic (PLEG): container finished" podID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerID="66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99" exitCode=143 Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.540202 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.546712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e43e93-d7ca-4837-94c3-d95a3de412c8","Type":"ContainerDied","Data":"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b"} Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.547006 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e43e93-d7ca-4837-94c3-d95a3de412c8","Type":"ContainerDied","Data":"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99"} Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.547120 4823 scope.go:117] "RemoveContainer" containerID="3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.547289 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"84e43e93-d7ca-4837-94c3-d95a3de412c8","Type":"ContainerDied","Data":"f0742eba593858135266a56042ee2e5f53ac4b94d0b3cfdd7b16c33ae2ac9a02"} Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.586999 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.589213 4823 scope.go:117] "RemoveContainer" containerID="66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.596507 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.611219 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:57 crc kubenswrapper[4823]: E0126 15:08:57.611810 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-log" Jan 26 15:08:57 crc 
kubenswrapper[4823]: I0126 15:08:57.611834 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-log" Jan 26 15:08:57 crc kubenswrapper[4823]: E0126 15:08:57.611881 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-metadata" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.611892 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-metadata" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.612105 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-metadata" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.612164 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" containerName="nova-metadata-log" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.613549 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.618655 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.619106 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.629975 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.683337 4823 scope.go:117] "RemoveContainer" containerID="3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b" Jan 26 15:08:57 crc kubenswrapper[4823]: E0126 15:08:57.684311 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b\": container with ID starting with 3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b not found: ID does not exist" containerID="3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.684478 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b"} err="failed to get container status \"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b\": rpc error: code = NotFound desc = could not find container \"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b\": container with ID starting with 3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b not found: ID does not exist" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.684522 4823 scope.go:117] "RemoveContainer" containerID="66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99" Jan 26 15:08:57 crc 
kubenswrapper[4823]: E0126 15:08:57.685011 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99\": container with ID starting with 66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99 not found: ID does not exist" containerID="66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.685126 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99"} err="failed to get container status \"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99\": rpc error: code = NotFound desc = could not find container \"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99\": container with ID starting with 66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99 not found: ID does not exist" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.685176 4823 scope.go:117] "RemoveContainer" containerID="3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.685513 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b"} err="failed to get container status \"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b\": rpc error: code = NotFound desc = could not find container \"3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b\": container with ID starting with 3e1b01aeca7863b4a519eca46acdd370e22bc9e87d2f009d42572fbd17e4318b not found: ID does not exist" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.685566 4823 scope.go:117] "RemoveContainer" containerID="66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99" Jan 26 
15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.686317 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99"} err="failed to get container status \"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99\": rpc error: code = NotFound desc = could not find container \"66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99\": container with ID starting with 66da81804fbc82eccd97ff8361c516e84504abb5764a31b9a3fb8b3bd6b57d99 not found: ID does not exist" Jan 26 15:08:57 crc kubenswrapper[4823]: E0126 15:08:57.744320 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84e43e93_d7ca_4837_94c3_d95a3de412c8.slice\": RecentStats: unable to find data in memory cache]" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.788732 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5f2\" (UniqueName: \"kubernetes.io/projected/7b86fa42-d746-4a53-8af8-534446268fb8-kube-api-access-bv5f2\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.788798 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.788993 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-config-data\") pod 
\"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.789180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.789409 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b86fa42-d746-4a53-8af8-534446268fb8-logs\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.890783 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b86fa42-d746-4a53-8af8-534446268fb8-logs\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.890893 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv5f2\" (UniqueName: \"kubernetes.io/projected/7b86fa42-d746-4a53-8af8-534446268fb8-kube-api-access-bv5f2\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.890923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 
15:08:57.890973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-config-data\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.891037 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.892811 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b86fa42-d746-4a53-8af8-534446268fb8-logs\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.896476 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-config-data\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.896903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.896951 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.912777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv5f2\" (UniqueName: \"kubernetes.io/projected/7b86fa42-d746-4a53-8af8-534446268fb8-kube-api-access-bv5f2\") pod \"nova-metadata-0\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " pod="openstack/nova-metadata-0" Jan 26 15:08:57 crc kubenswrapper[4823]: I0126 15:08:57.983108 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:08:58 crc kubenswrapper[4823]: I0126 15:08:58.438010 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 15:08:58 crc kubenswrapper[4823]: I0126 15:08:58.619904 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.655082 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e43e93-d7ca-4837-94c3-d95a3de412c8" path="/var/lib/kubelet/pods/84e43e93-d7ca-4837-94c3-d95a3de412c8/volumes" Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.671264 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b86fa42-d746-4a53-8af8-534446268fb8","Type":"ContainerStarted","Data":"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7"} Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.671650 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b86fa42-d746-4a53-8af8-534446268fb8","Type":"ContainerStarted","Data":"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d"} Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.671731 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"7b86fa42-d746-4a53-8af8-534446268fb8","Type":"ContainerStarted","Data":"e641065de10794e86e74faad26b6c98594fefdf9af81d901b75750a527ea401d"} Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.691713 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.691688527 podStartE2EDuration="2.691688527s" podCreationTimestamp="2026-01-26 15:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:08:59.69071929 +0000 UTC m=+1336.376182395" watchObservedRunningTime="2026-01-26 15:08:59.691688527 +0000 UTC m=+1336.377151642" Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.851335 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:08:59 crc kubenswrapper[4823]: I0126 15:08:59.851825 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.184994 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.202740 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.202821 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.245078 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.245924 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.347152 4823 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-x2bgk"] Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.347493 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerName="dnsmasq-dns" containerID="cri-o://26f087c98a6eb40df28a85d4ef864ea5191bae88c6133cb412138d310c620805" gracePeriod=10 Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.759681 4823 generic.go:334] "Generic (PLEG): container finished" podID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerID="26f087c98a6eb40df28a85d4ef864ea5191bae88c6133cb412138d310c620805" exitCode=0 Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.760200 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" event={"ID":"726afeb0-ed38-4d62-ad73-c0379a57f547","Type":"ContainerDied","Data":"26f087c98a6eb40df28a85d4ef864ea5191bae88c6133cb412138d310c620805"} Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.764694 4823 generic.go:334] "Generic (PLEG): container finished" podID="56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" containerID="726164a2aa520369c61dc9c8f0a5763f054a1c63c9cb2ba7134adb34bd3f3356" exitCode=0 Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.764876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" event={"ID":"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2","Type":"ContainerDied","Data":"726164a2aa520369c61dc9c8f0a5763f054a1c63c9cb2ba7134adb34bd3f3356"} Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.826204 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.918756 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.934696 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:00 crc kubenswrapper[4823]: I0126 15:09:00.934707 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.098069 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-config\") pod \"726afeb0-ed38-4d62-ad73-c0379a57f547\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.098174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-dns-svc\") pod \"726afeb0-ed38-4d62-ad73-c0379a57f547\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.098340 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pth7\" (UniqueName: \"kubernetes.io/projected/726afeb0-ed38-4d62-ad73-c0379a57f547-kube-api-access-7pth7\") pod \"726afeb0-ed38-4d62-ad73-c0379a57f547\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.098396 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-sb\") pod \"726afeb0-ed38-4d62-ad73-c0379a57f547\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.098654 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-nb\") pod \"726afeb0-ed38-4d62-ad73-c0379a57f547\" (UID: \"726afeb0-ed38-4d62-ad73-c0379a57f547\") " Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.108711 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/726afeb0-ed38-4d62-ad73-c0379a57f547-kube-api-access-7pth7" (OuterVolumeSpecName: "kube-api-access-7pth7") pod "726afeb0-ed38-4d62-ad73-c0379a57f547" (UID: "726afeb0-ed38-4d62-ad73-c0379a57f547"). InnerVolumeSpecName "kube-api-access-7pth7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.157442 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "726afeb0-ed38-4d62-ad73-c0379a57f547" (UID: "726afeb0-ed38-4d62-ad73-c0379a57f547"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.182192 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-config" (OuterVolumeSpecName: "config") pod "726afeb0-ed38-4d62-ad73-c0379a57f547" (UID: "726afeb0-ed38-4d62-ad73-c0379a57f547"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.196599 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "726afeb0-ed38-4d62-ad73-c0379a57f547" (UID: "726afeb0-ed38-4d62-ad73-c0379a57f547"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.202036 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.202087 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pth7\" (UniqueName: \"kubernetes.io/projected/726afeb0-ed38-4d62-ad73-c0379a57f547-kube-api-access-7pth7\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.202104 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.202130 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.220388 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "726afeb0-ed38-4d62-ad73-c0379a57f547" (UID: "726afeb0-ed38-4d62-ad73-c0379a57f547"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.303983 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/726afeb0-ed38-4d62-ad73-c0379a57f547-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.775495 4823 generic.go:334] "Generic (PLEG): container finished" podID="b6d4313b-cd31-4952-8f17-0a5021c4adc3" containerID="aa2c3fb280da60360c7695bb61cf0cf35ae2276aef775d0a6c0832363e1bdb40" exitCode=0 Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.775576 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cfvtn" event={"ID":"b6d4313b-cd31-4952-8f17-0a5021c4adc3","Type":"ContainerDied","Data":"aa2c3fb280da60360c7695bb61cf0cf35ae2276aef775d0a6c0832363e1bdb40"} Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.778330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" event={"ID":"726afeb0-ed38-4d62-ad73-c0379a57f547","Type":"ContainerDied","Data":"855595e2ecb63009fca3cff844640f87d928c4d5e4e4f6b300deb1b9b0b78b55"} Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.778399 4823 scope.go:117] "RemoveContainer" containerID="26f087c98a6eb40df28a85d4ef864ea5191bae88c6133cb412138d310c620805" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.778422 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-x2bgk" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.829175 4823 scope.go:117] "RemoveContainer" containerID="4fd2489bb2f81324a275b2ac6f3d7d00c7a3a379fe4efbacd52e958072146a40" Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.852945 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-x2bgk"] Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.865108 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.865404 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" containerName="kube-state-metrics" containerID="cri-o://6e82e9deb99af2b3b870a2ed2a53db407735ae46e99c77c22fa41e3ca8b9f407" gracePeriod=30 Jan 26 15:09:01 crc kubenswrapper[4823]: I0126 15:09:01.874094 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-x2bgk"] Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.201412 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.324353 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-combined-ca-bundle\") pod \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.324454 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-config-data\") pod \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.324496 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-scripts\") pod \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.324741 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gczs\" (UniqueName: \"kubernetes.io/projected/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-kube-api-access-4gczs\") pod \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\" (UID: \"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2\") " Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.331796 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-kube-api-access-4gczs" (OuterVolumeSpecName: "kube-api-access-4gczs") pod "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" (UID: "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2"). InnerVolumeSpecName "kube-api-access-4gczs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.337468 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-scripts" (OuterVolumeSpecName: "scripts") pod "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" (UID: "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.381520 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" (UID: "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.390461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-config-data" (OuterVolumeSpecName: "config-data") pod "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" (UID: "56d63df0-04ad-4cab-b8ae-e6cbb09c28e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.427045 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gczs\" (UniqueName: \"kubernetes.io/projected/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-kube-api-access-4gczs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.427086 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.427097 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.427105 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.792485 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.793461 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-ddcsn" event={"ID":"56d63df0-04ad-4cab-b8ae-e6cbb09c28e2","Type":"ContainerDied","Data":"56267f0c1b908910ab38b905a661a179c90ef06a7c6192112b3ad754fb7df8c7"} Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.793525 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56267f0c1b908910ab38b905a661a179c90ef06a7c6192112b3ad754fb7df8c7" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.800769 4823 generic.go:334] "Generic (PLEG): container finished" podID="19474244-0d03-4e7f-8a6d-abd64aafaff9" containerID="6e82e9deb99af2b3b870a2ed2a53db407735ae46e99c77c22fa41e3ca8b9f407" exitCode=2 Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.800823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"19474244-0d03-4e7f-8a6d-abd64aafaff9","Type":"ContainerDied","Data":"6e82e9deb99af2b3b870a2ed2a53db407735ae46e99c77c22fa41e3ca8b9f407"} Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.800897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"19474244-0d03-4e7f-8a6d-abd64aafaff9","Type":"ContainerDied","Data":"902940ade0e4f7212b0380281e6e2742ec87cc5c503ddb3e658e20c5d0439150"} Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.800910 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="902940ade0e4f7212b0380281e6e2742ec87cc5c503ddb3e658e20c5d0439150" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.836466 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.928201 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 15:09:02 crc kubenswrapper[4823]: E0126 15:09:02.928821 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" containerName="nova-cell1-conductor-db-sync" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.928842 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" containerName="nova-cell1-conductor-db-sync" Jan 26 15:09:02 crc kubenswrapper[4823]: E0126 15:09:02.928864 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerName="dnsmasq-dns" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.928871 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerName="dnsmasq-dns" Jan 26 15:09:02 crc kubenswrapper[4823]: E0126 15:09:02.928897 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerName="init" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.928905 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerName="init" Jan 26 15:09:02 crc kubenswrapper[4823]: E0126 15:09:02.928922 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" containerName="kube-state-metrics" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.928929 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" containerName="kube-state-metrics" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.929173 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" 
containerName="kube-state-metrics" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.929198 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" containerName="nova-cell1-conductor-db-sync" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.929216 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" containerName="dnsmasq-dns" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.930037 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.934754 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.935489 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpcpt\" (UniqueName: \"kubernetes.io/projected/19474244-0d03-4e7f-8a6d-abd64aafaff9-kube-api-access-lpcpt\") pod \"19474244-0d03-4e7f-8a6d-abd64aafaff9\" (UID: \"19474244-0d03-4e7f-8a6d-abd64aafaff9\") " Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.942066 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.949736 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19474244-0d03-4e7f-8a6d-abd64aafaff9-kube-api-access-lpcpt" (OuterVolumeSpecName: "kube-api-access-lpcpt") pod "19474244-0d03-4e7f-8a6d-abd64aafaff9" (UID: "19474244-0d03-4e7f-8a6d-abd64aafaff9"). InnerVolumeSpecName "kube-api-access-lpcpt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.984192 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:09:02 crc kubenswrapper[4823]: I0126 15:09:02.985229 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.038598 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8d7965-2086-497c-aa14-b8922c56fc65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.038904 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5k4d\" (UniqueName: \"kubernetes.io/projected/1b8d7965-2086-497c-aa14-b8922c56fc65-kube-api-access-c5k4d\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.039120 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8d7965-2086-497c-aa14-b8922c56fc65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.039252 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpcpt\" (UniqueName: \"kubernetes.io/projected/19474244-0d03-4e7f-8a6d-abd64aafaff9-kube-api-access-lpcpt\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.103197 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.103521 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-central-agent" containerID="cri-o://927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f" gracePeriod=30 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.103599 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="proxy-httpd" containerID="cri-o://e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b" gracePeriod=30 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.103649 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-notification-agent" containerID="cri-o://96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d" gracePeriod=30 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.103649 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="sg-core" containerID="cri-o://7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa" gracePeriod=30 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.141290 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8d7965-2086-497c-aa14-b8922c56fc65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.141458 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1b8d7965-2086-497c-aa14-b8922c56fc65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.141501 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5k4d\" (UniqueName: \"kubernetes.io/projected/1b8d7965-2086-497c-aa14-b8922c56fc65-kube-api-access-c5k4d\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.150408 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b8d7965-2086-497c-aa14-b8922c56fc65-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.150433 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b8d7965-2086-497c-aa14-b8922c56fc65-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.162023 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5k4d\" (UniqueName: \"kubernetes.io/projected/1b8d7965-2086-497c-aa14-b8922c56fc65-kube-api-access-c5k4d\") pod \"nova-cell1-conductor-0\" (UID: \"1b8d7965-2086-497c-aa14-b8922c56fc65\") " pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.226860 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.308846 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.348154 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8f4j\" (UniqueName: \"kubernetes.io/projected/b6d4313b-cd31-4952-8f17-0a5021c4adc3-kube-api-access-b8f4j\") pod \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.348224 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-combined-ca-bundle\") pod \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.348286 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-config-data\") pod \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.348384 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-scripts\") pod \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\" (UID: \"b6d4313b-cd31-4952-8f17-0a5021c4adc3\") " Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.352969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d4313b-cd31-4952-8f17-0a5021c4adc3-kube-api-access-b8f4j" (OuterVolumeSpecName: "kube-api-access-b8f4j") pod "b6d4313b-cd31-4952-8f17-0a5021c4adc3" (UID: 
"b6d4313b-cd31-4952-8f17-0a5021c4adc3"). InnerVolumeSpecName "kube-api-access-b8f4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.354305 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-scripts" (OuterVolumeSpecName: "scripts") pod "b6d4313b-cd31-4952-8f17-0a5021c4adc3" (UID: "b6d4313b-cd31-4952-8f17-0a5021c4adc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.375230 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6d4313b-cd31-4952-8f17-0a5021c4adc3" (UID: "b6d4313b-cd31-4952-8f17-0a5021c4adc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.387540 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-config-data" (OuterVolumeSpecName: "config-data") pod "b6d4313b-cd31-4952-8f17-0a5021c4adc3" (UID: "b6d4313b-cd31-4952-8f17-0a5021c4adc3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.458112 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.458689 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8f4j\" (UniqueName: \"kubernetes.io/projected/b6d4313b-cd31-4952-8f17-0a5021c4adc3-kube-api-access-b8f4j\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.458717 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.458738 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6d4313b-cd31-4952-8f17-0a5021c4adc3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.578961 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="726afeb0-ed38-4d62-ad73-c0379a57f547" path="/var/lib/kubelet/pods/726afeb0-ed38-4d62-ad73-c0379a57f547/volumes" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.811597 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-cfvtn" event={"ID":"b6d4313b-cd31-4952-8f17-0a5021c4adc3","Type":"ContainerDied","Data":"d3297196468202ba9e496ec3e1974879459af71ca6d7aab1b2e104cee772b8a7"} Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.811650 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3297196468202ba9e496ec3e1974879459af71ca6d7aab1b2e104cee772b8a7" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.811721 4823 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-cfvtn" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818148 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerID="e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b" exitCode=0 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818189 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerID="7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa" exitCode=2 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818205 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerID="927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f" exitCode=0 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818250 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerDied","Data":"e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b"} Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818294 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerDied","Data":"7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa"} Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818315 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerDied","Data":"927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f"} Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.818562 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.843979 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.864551 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:09:03 crc kubenswrapper[4823]: W0126 15:09:03.866419 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1b8d7965_2086_497c_aa14_b8922c56fc65.slice/crio-670cf469655d3e1b0f5a0fa883ab113bc90aab4fd270046e990c73414961ef10 WatchSource:0}: Error finding container 670cf469655d3e1b0f5a0fa883ab113bc90aab4fd270046e990c73414961ef10: Status 404 returned error can't find the container with id 670cf469655d3e1b0f5a0fa883ab113bc90aab4fd270046e990c73414961ef10 Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.881173 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.912968 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:09:03 crc kubenswrapper[4823]: E0126 15:09:03.913669 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d4313b-cd31-4952-8f17-0a5021c4adc3" containerName="nova-manage" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.913699 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d4313b-cd31-4952-8f17-0a5021c4adc3" containerName="nova-manage" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.913931 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d4313b-cd31-4952-8f17-0a5021c4adc3" containerName="nova-manage" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.914863 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.919325 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.919598 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 26 15:09:03 crc kubenswrapper[4823]: I0126 15:09:03.920989 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.074025 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.074090 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6nq7\" (UniqueName: \"kubernetes.io/projected/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-api-access-t6nq7\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.074125 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.074161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.136214 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.136560 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-log" containerID="cri-o://d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094" gracePeriod=30 Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.136642 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-api" containerID="cri-o://1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7" gracePeriod=30 Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.152944 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.153309 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" containerName="nova-scheduler-scheduler" containerID="cri-o://cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" gracePeriod=30 Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.163939 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.175347 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6nq7\" (UniqueName: \"kubernetes.io/projected/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-api-access-t6nq7\") pod 
\"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.175504 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.175551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.175696 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.180859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.182541 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: 
\"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.183946 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.193047 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6nq7\" (UniqueName: \"kubernetes.io/projected/28e0835b-8ae8-4732-883a-65766b6c38a7-kube-api-access-t6nq7\") pod \"kube-state-metrics-0\" (UID: \"28e0835b-8ae8-4732-883a-65766b6c38a7\") " pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.297651 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.811337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.831655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1b8d7965-2086-497c-aa14-b8922c56fc65","Type":"ContainerStarted","Data":"87f4662995d4a32f32a33c4e50781a1ad8b31bbb61272dfa1d823001ddaa7102"} Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.831769 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.831785 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"1b8d7965-2086-497c-aa14-b8922c56fc65","Type":"ContainerStarted","Data":"670cf469655d3e1b0f5a0fa883ab113bc90aab4fd270046e990c73414961ef10"} Jan 26 15:09:04 crc 
kubenswrapper[4823]: W0126 15:09:04.843601 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28e0835b_8ae8_4732_883a_65766b6c38a7.slice/crio-e323bea99a4f02b4de22f8e5113951f961d78626d624ee895f242b3ac0e9e6db WatchSource:0}: Error finding container e323bea99a4f02b4de22f8e5113951f961d78626d624ee895f242b3ac0e9e6db: Status 404 returned error can't find the container with id e323bea99a4f02b4de22f8e5113951f961d78626d624ee895f242b3ac0e9e6db Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.844034 4823 generic.go:334] "Generic (PLEG): container finished" podID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerID="d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094" exitCode=143 Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.844109 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0","Type":"ContainerDied","Data":"d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094"} Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.844308 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-log" containerID="cri-o://781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d" gracePeriod=30 Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.844394 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-metadata" containerID="cri-o://4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7" gracePeriod=30 Jan 26 15:09:04 crc kubenswrapper[4823]: I0126 15:09:04.862139 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.862115818 
podStartE2EDuration="2.862115818s" podCreationTimestamp="2026-01-26 15:09:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:04.854375278 +0000 UTC m=+1341.539838383" watchObservedRunningTime="2026-01-26 15:09:04.862115818 +0000 UTC m=+1341.547578923" Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.204697 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.206508 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.223271 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.223344 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" containerName="nova-scheduler-scheduler" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.404734 4823 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.505387 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv5f2\" (UniqueName: \"kubernetes.io/projected/7b86fa42-d746-4a53-8af8-534446268fb8-kube-api-access-bv5f2\") pod \"7b86fa42-d746-4a53-8af8-534446268fb8\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.505518 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b86fa42-d746-4a53-8af8-534446268fb8-logs\") pod \"7b86fa42-d746-4a53-8af8-534446268fb8\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.505538 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-combined-ca-bundle\") pod \"7b86fa42-d746-4a53-8af8-534446268fb8\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.505574 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-nova-metadata-tls-certs\") pod \"7b86fa42-d746-4a53-8af8-534446268fb8\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.505606 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-config-data\") pod \"7b86fa42-d746-4a53-8af8-534446268fb8\" (UID: \"7b86fa42-d746-4a53-8af8-534446268fb8\") " Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.506222 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/empty-dir/7b86fa42-d746-4a53-8af8-534446268fb8-logs" (OuterVolumeSpecName: "logs") pod "7b86fa42-d746-4a53-8af8-534446268fb8" (UID: "7b86fa42-d746-4a53-8af8-534446268fb8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.510982 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b86fa42-d746-4a53-8af8-534446268fb8-kube-api-access-bv5f2" (OuterVolumeSpecName: "kube-api-access-bv5f2") pod "7b86fa42-d746-4a53-8af8-534446268fb8" (UID: "7b86fa42-d746-4a53-8af8-534446268fb8"). InnerVolumeSpecName "kube-api-access-bv5f2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.533602 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b86fa42-d746-4a53-8af8-534446268fb8" (UID: "7b86fa42-d746-4a53-8af8-534446268fb8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.541777 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-config-data" (OuterVolumeSpecName: "config-data") pod "7b86fa42-d746-4a53-8af8-534446268fb8" (UID: "7b86fa42-d746-4a53-8af8-534446268fb8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.557490 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "7b86fa42-d746-4a53-8af8-534446268fb8" (UID: "7b86fa42-d746-4a53-8af8-534446268fb8"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.570854 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19474244-0d03-4e7f-8a6d-abd64aafaff9" path="/var/lib/kubelet/pods/19474244-0d03-4e7f-8a6d-abd64aafaff9/volumes" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.607728 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7b86fa42-d746-4a53-8af8-534446268fb8-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.607769 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.607780 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.607792 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b86fa42-d746-4a53-8af8-534446268fb8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.607802 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv5f2\" (UniqueName: \"kubernetes.io/projected/7b86fa42-d746-4a53-8af8-534446268fb8-kube-api-access-bv5f2\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.855627 4823 generic.go:334] "Generic (PLEG): container finished" podID="7b86fa42-d746-4a53-8af8-534446268fb8" containerID="4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7" exitCode=0 Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 
15:09:05.857070 4823 generic.go:334] "Generic (PLEG): container finished" podID="7b86fa42-d746-4a53-8af8-534446268fb8" containerID="781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d" exitCode=143 Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.855725 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.855762 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b86fa42-d746-4a53-8af8-534446268fb8","Type":"ContainerDied","Data":"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7"} Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.857308 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b86fa42-d746-4a53-8af8-534446268fb8","Type":"ContainerDied","Data":"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d"} Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.857333 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7b86fa42-d746-4a53-8af8-534446268fb8","Type":"ContainerDied","Data":"e641065de10794e86e74faad26b6c98594fefdf9af81d901b75750a527ea401d"} Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.857352 4823 scope.go:117] "RemoveContainer" containerID="4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.859841 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"28e0835b-8ae8-4732-883a-65766b6c38a7","Type":"ContainerStarted","Data":"9d25f874e193db9b65b08ca7659348654ec65e1b0de4f78e9f49fa0cfe2940db"} Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.860223 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"28e0835b-8ae8-4732-883a-65766b6c38a7","Type":"ContainerStarted","Data":"e323bea99a4f02b4de22f8e5113951f961d78626d624ee895f242b3ac0e9e6db"} Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.888956 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.535508667 podStartE2EDuration="2.888935055s" podCreationTimestamp="2026-01-26 15:09:03 +0000 UTC" firstStartedPulling="2026-01-26 15:09:04.845498304 +0000 UTC m=+1341.530961409" lastFinishedPulling="2026-01-26 15:09:05.198924692 +0000 UTC m=+1341.884387797" observedRunningTime="2026-01-26 15:09:05.881411169 +0000 UTC m=+1342.566874284" watchObservedRunningTime="2026-01-26 15:09:05.888935055 +0000 UTC m=+1342.574398160" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.889406 4823 scope.go:117] "RemoveContainer" containerID="781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.911423 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.916170 4823 scope.go:117] "RemoveContainer" containerID="4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7" Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.916890 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7\": container with ID starting with 4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7 not found: ID does not exist" containerID="4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.916955 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7"} err="failed to get 
container status \"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7\": rpc error: code = NotFound desc = could not find container \"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7\": container with ID starting with 4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7 not found: ID does not exist" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.916983 4823 scope.go:117] "RemoveContainer" containerID="781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d" Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.917315 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d\": container with ID starting with 781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d not found: ID does not exist" containerID="781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.917333 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d"} err="failed to get container status \"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d\": rpc error: code = NotFound desc = could not find container \"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d\": container with ID starting with 781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d not found: ID does not exist" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.917346 4823 scope.go:117] "RemoveContainer" containerID="4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.917543 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7"} 
err="failed to get container status \"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7\": rpc error: code = NotFound desc = could not find container \"4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7\": container with ID starting with 4c1cf81dec9d9e7a4be29456584bacdef584f25018138c5eba894246ee3ac5a7 not found: ID does not exist" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.917556 4823 scope.go:117] "RemoveContainer" containerID="781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.917726 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d"} err="failed to get container status \"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d\": rpc error: code = NotFound desc = could not find container \"781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d\": container with ID starting with 781487c94942ea53e6db2c345b81955ae1a4d2d5a32a27aa5fa5423444b5427d not found: ID does not exist" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.919157 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.938977 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.939627 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-metadata" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.939660 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-metadata" Jan 26 15:09:05 crc kubenswrapper[4823]: E0126 15:09:05.939686 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-log" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.939698 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-log" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.939939 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-log" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.939978 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" containerName="nova-metadata-metadata" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.941261 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.944525 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.944546 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 15:09:05 crc kubenswrapper[4823]: I0126 15:09:05.956871 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.116392 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.116479 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-config-data\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.116502 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvgbn\" (UniqueName: \"kubernetes.io/projected/193cf951-14a6-4175-95a9-e832702f5576-kube-api-access-xvgbn\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.116526 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.116581 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/193cf951-14a6-4175-95a9-e832702f5576-logs\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.218709 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.218797 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-config-data\") pod \"nova-metadata-0\" (UID: 
\"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.219785 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvgbn\" (UniqueName: \"kubernetes.io/projected/193cf951-14a6-4175-95a9-e832702f5576-kube-api-access-xvgbn\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.219815 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.220048 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/193cf951-14a6-4175-95a9-e832702f5576-logs\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.220530 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/193cf951-14a6-4175-95a9-e832702f5576-logs\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.224420 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.224805 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.233311 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-config-data\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.251589 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvgbn\" (UniqueName: \"kubernetes.io/projected/193cf951-14a6-4175-95a9-e832702f5576-kube-api-access-xvgbn\") pod \"nova-metadata-0\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.279722 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.824299 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.872201 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"193cf951-14a6-4175-95a9-e832702f5576","Type":"ContainerStarted","Data":"01304225a42d5141174c91d5baa13e85d3d47c864a48815aa86bfccba76bf1b5"} Jan 26 15:09:06 crc kubenswrapper[4823]: I0126 15:09:06.874053 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.308868 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.445419 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446123 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-run-httpd\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446211 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-scripts\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446260 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-sg-core-conf-yaml\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446404 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-log-httpd\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446438 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f54wn\" (UniqueName: \"kubernetes.io/projected/ac87bc60-d424-40d1-913b-14d363dc5b1b-kube-api-access-f54wn\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 
crc kubenswrapper[4823]: I0126 15:09:07.446549 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-config-data\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446605 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-combined-ca-bundle\") pod \"ac87bc60-d424-40d1-913b-14d363dc5b1b\" (UID: \"ac87bc60-d424-40d1-913b-14d363dc5b1b\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.446814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.447136 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.448398 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.448424 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ac87bc60-d424-40d1-913b-14d363dc5b1b-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.452969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-scripts" (OuterVolumeSpecName: "scripts") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.453610 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac87bc60-d424-40d1-913b-14d363dc5b1b-kube-api-access-f54wn" (OuterVolumeSpecName: "kube-api-access-f54wn") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "kube-api-access-f54wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.493967 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.542158 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.550235 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-config-data\") pod \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.550390 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gdkj\" (UniqueName: \"kubernetes.io/projected/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-kube-api-access-6gdkj\") pod \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.550453 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-combined-ca-bundle\") pod \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\" (UID: \"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.550935 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f54wn\" (UniqueName: \"kubernetes.io/projected/ac87bc60-d424-40d1-913b-14d363dc5b1b-kube-api-access-f54wn\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.550948 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.550978 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.551013 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.558741 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-kube-api-access-6gdkj" (OuterVolumeSpecName: "kube-api-access-6gdkj") pod "fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" (UID: "fa81ae7f-1dd2-4405-a0f4-388f3883ef7a"). InnerVolumeSpecName "kube-api-access-6gdkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.583212 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b86fa42-d746-4a53-8af8-534446268fb8" path="/var/lib/kubelet/pods/7b86fa42-d746-4a53-8af8-534446268fb8/volumes" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.601529 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-config-data" (OuterVolumeSpecName: "config-data") pod "fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" (UID: "fa81ae7f-1dd2-4405-a0f4-388f3883ef7a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.608531 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-config-data" (OuterVolumeSpecName: "config-data") pod "ac87bc60-d424-40d1-913b-14d363dc5b1b" (UID: "ac87bc60-d424-40d1-913b-14d363dc5b1b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.621194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" (UID: "fa81ae7f-1dd2-4405-a0f4-388f3883ef7a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.686829 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.686874 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gdkj\" (UniqueName: \"kubernetes.io/projected/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-kube-api-access-6gdkj\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.686889 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac87bc60-d424-40d1-913b-14d363dc5b1b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.686903 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a-combined-ca-bundle\") on node \"crc\" DevicePath 
\"\"" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.837661 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.900001 4823 generic.go:334] "Generic (PLEG): container finished" podID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerID="96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d" exitCode=0 Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.900093 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerDied","Data":"96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.900139 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.900162 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ac87bc60-d424-40d1-913b-14d363dc5b1b","Type":"ContainerDied","Data":"351b62b812a717b97a6f6d5b2ab0ae87780eaa267f9b41a2117c966f8dc84846"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.900183 4823 scope.go:117] "RemoveContainer" containerID="e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.902708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"193cf951-14a6-4175-95a9-e832702f5576","Type":"ContainerStarted","Data":"de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.902743 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"193cf951-14a6-4175-95a9-e832702f5576","Type":"ContainerStarted","Data":"44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59"} Jan 26 15:09:07 
crc kubenswrapper[4823]: I0126 15:09:07.907915 4823 generic.go:334] "Generic (PLEG): container finished" podID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerID="1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7" exitCode=0 Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.907964 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.907972 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0","Type":"ContainerDied","Data":"1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.907991 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0","Type":"ContainerDied","Data":"ae267cc955c605faf68505b96d23f966305d5689644c4908e7c645c056154444"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.911552 4823 generic.go:334] "Generic (PLEG): container finished" podID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" exitCode=0 Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.911587 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.911823 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a","Type":"ContainerDied","Data":"cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.911905 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"fa81ae7f-1dd2-4405-a0f4-388f3883ef7a","Type":"ContainerDied","Data":"316a30dccf15e289a7ca6c70f0cab16493ca3a10bcc4297d8466846292d0dd14"} Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.930445 4823 scope.go:117] "RemoveContainer" containerID="7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.952642 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.9526160409999997 podStartE2EDuration="2.952616041s" podCreationTimestamp="2026-01-26 15:09:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:07.930493797 +0000 UTC m=+1344.615956922" watchObservedRunningTime="2026-01-26 15:09:07.952616041 +0000 UTC m=+1344.638079146" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.957820 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.968056 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.983049 4823 scope.go:117] "RemoveContainer" containerID="96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.983050 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.991695 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mrhd\" (UniqueName: \"kubernetes.io/projected/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-kube-api-access-4mrhd\") pod \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.991765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-logs\") pod \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.991818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-config-data\") pod \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.991863 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-combined-ca-bundle\") pod \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\" (UID: \"0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0\") " Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.992506 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-logs" (OuterVolumeSpecName: "logs") pod "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" (UID: "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:07 crc kubenswrapper[4823]: I0126 15:09:07.995879 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.004386 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-kube-api-access-4mrhd" (OuterVolumeSpecName: "kube-api-access-4mrhd") pod "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" (UID: "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0"). InnerVolumeSpecName "kube-api-access-4mrhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010329 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010754 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-notification-agent" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010777 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-notification-agent" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010812 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-log" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010823 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-log" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010832 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-central-agent" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010842 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-central-agent" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010854 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-api" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010860 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-api" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010875 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="sg-core" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010881 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="sg-core" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010887 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" containerName="nova-scheduler-scheduler" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010893 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" containerName="nova-scheduler-scheduler" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.010904 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="proxy-httpd" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.010910 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="proxy-httpd" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011062 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" containerName="nova-scheduler-scheduler" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011085 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-api" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011095 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" containerName="nova-api-log" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011107 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-notification-agent" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011119 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="proxy-httpd" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011132 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="ceilometer-central-agent" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011143 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" containerName="sg-core" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.011873 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.025384 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.031029 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.033732 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac87bc60_d424_40d1_913b_14d363dc5b1b.slice\": RecentStats: unable to find data in memory cache]" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.043111 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.045243 4823 scope.go:117] "RemoveContainer" containerID="927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.045483 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.050415 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.051789 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.051929 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.058904 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-config-data" (OuterVolumeSpecName: "config-data") pod "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" (UID: "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.059706 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.060572 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" (UID: "0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.090689 4823 scope.go:117] "RemoveContainer" containerID="e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.091177 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b\": container with ID starting with e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b not found: ID does not exist" containerID="e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.091231 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b"} err="failed to get container status \"e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b\": rpc error: code = NotFound desc = could not find container \"e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b\": container with ID starting with e5345735f0112c5a8a8c032ba5a39c05e47203da439047034c3654a080af912b not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.091293 4823 scope.go:117] "RemoveContainer" containerID="7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.092230 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa\": container with ID starting with 7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa not found: ID does not exist" containerID="7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.092355 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa"} err="failed to get container status \"7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa\": rpc error: code = NotFound desc = could not find container \"7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa\": container with ID starting with 7dc97c115b9fa0cde5d78b7ea070b1a80ac75baec775594f4302a1b399f54afa not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.092402 4823 scope.go:117] "RemoveContainer" containerID="96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.092962 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d\": container with ID starting with 96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d not found: ID does not exist" containerID="96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093000 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d"} err="failed to get container status \"96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d\": rpc error: code = NotFound desc = could not find container \"96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d\": container with ID starting with 96e01f05777c7921f26b3c48a90b5a3a2ac7ea9886a8543d44fc073728228d5d not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093027 4823 scope.go:117] "RemoveContainer" containerID="927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 
15:09:08.093452 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f\": container with ID starting with 927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f not found: ID does not exist" containerID="927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093502 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f"} err="failed to get container status \"927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f\": rpc error: code = NotFound desc = could not find container \"927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f\": container with ID starting with 927487ce84072a4c2c4f5131562e536ea45d099be2b0215cc7b8626f00a85b4f not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093531 4823 scope.go:117] "RemoveContainer" containerID="1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093891 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mrhd\" (UniqueName: \"kubernetes.io/projected/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-kube-api-access-4mrhd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093947 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.093959 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:08 crc 
kubenswrapper[4823]: I0126 15:09:08.093968 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.117000 4823 scope.go:117] "RemoveContainer" containerID="d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.140420 4823 scope.go:117] "RemoveContainer" containerID="1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.141035 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7\": container with ID starting with 1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7 not found: ID does not exist" containerID="1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.141102 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7"} err="failed to get container status \"1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7\": rpc error: code = NotFound desc = could not find container \"1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7\": container with ID starting with 1d927d20ca314fa6b55844d4f429bdfbf25df4b588b2fb5f6994caf76ec35ab7 not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.141146 4823 scope.go:117] "RemoveContainer" containerID="d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.141700 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094\": container with ID starting with d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094 not found: ID does not exist" containerID="d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.141764 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094"} err="failed to get container status \"d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094\": rpc error: code = NotFound desc = could not find container \"d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094\": container with ID starting with d197aaaf45e3d7b754f5eb81f9d433aef07f2a020a4d645b81fd199184ba7094 not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.141825 4823 scope.go:117] "RemoveContainer" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.163160 4823 scope.go:117] "RemoveContainer" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" Jan 26 15:09:08 crc kubenswrapper[4823]: E0126 15:09:08.163818 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742\": container with ID starting with cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742 not found: ID does not exist" containerID="cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.163882 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742"} err="failed to get container status 
\"cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742\": rpc error: code = NotFound desc = could not find container \"cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742\": container with ID starting with cdff725f22aa8b98e7b531d3d4cfb96ead01ed3527638ed3586c063886fe9742 not found: ID does not exist" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196008 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196090 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-run-httpd\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196121 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-config-data\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196181 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87stq\" (UniqueName: \"kubernetes.io/projected/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-kube-api-access-87stq\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196223 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-log-httpd\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196243 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196280 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196417 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-scripts\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196514 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4nr9\" (UniqueName: \"kubernetes.io/projected/56f3c039-ed21-4a16-a877-757cfff7e8b9-kube-api-access-t4nr9\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.196866 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-config-data\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.275649 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.287855 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298140 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-config-data\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298246 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87stq\" (UniqueName: \"kubernetes.io/projected/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-kube-api-access-87stq\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-log-httpd\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298308 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298345 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298392 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-scripts\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298441 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4nr9\" (UniqueName: \"kubernetes.io/projected/56f3c039-ed21-4a16-a877-757cfff7e8b9-kube-api-access-t4nr9\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298468 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298505 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-config-data\") pod 
\"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298523 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.298545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-run-httpd\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.299001 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-run-httpd\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.300946 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-log-httpd\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.308633 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.310542 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-config-data\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.312953 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-scripts\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.314061 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.315288 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.317411 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.329692 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.331193 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.331868 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87stq\" (UniqueName: \"kubernetes.io/projected/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-kube-api-access-87stq\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.334452 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-config-data\") pod \"ceilometer-0\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.335547 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.339548 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4nr9\" (UniqueName: \"kubernetes.io/projected/56f3c039-ed21-4a16-a877-757cfff7e8b9-kube-api-access-t4nr9\") pod \"nova-scheduler-0\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.358728 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.387887 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.396431 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.505335 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6zjl\" (UniqueName: \"kubernetes.io/projected/86f25b75-4164-483d-a1cd-6d6509b8e5cd-kube-api-access-w6zjl\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.505429 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.505496 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f25b75-4164-483d-a1cd-6d6509b8e5cd-logs\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.505525 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-config-data\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.607106 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f25b75-4164-483d-a1cd-6d6509b8e5cd-logs\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " 
pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.607554 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-config-data\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.607621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6zjl\" (UniqueName: \"kubernetes.io/projected/86f25b75-4164-483d-a1cd-6d6509b8e5cd-kube-api-access-w6zjl\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.607659 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.607839 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f25b75-4164-483d-a1cd-6d6509b8e5cd-logs\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.624276 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.633693 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6zjl\" (UniqueName: 
\"kubernetes.io/projected/86f25b75-4164-483d-a1cd-6d6509b8e5cd-kube-api-access-w6zjl\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.639343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-config-data\") pod \"nova-api-0\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " pod="openstack/nova-api-0" Jan 26 15:09:08 crc kubenswrapper[4823]: I0126 15:09:08.655272 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.074434 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:09 crc kubenswrapper[4823]: W0126 15:09:09.076348 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56f3c039_ed21_4a16_a877_757cfff7e8b9.slice/crio-7ec29ac2d65082c904cae7d6038c5214e411b1e7d084971aa153d8c446e43847 WatchSource:0}: Error finding container 7ec29ac2d65082c904cae7d6038c5214e411b1e7d084971aa153d8c446e43847: Status 404 returned error can't find the container with id 7ec29ac2d65082c904cae7d6038c5214e411b1e7d084971aa153d8c446e43847 Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.086212 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.192028 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:09 crc kubenswrapper[4823]: W0126 15:09:09.210004 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86f25b75_4164_483d_a1cd_6d6509b8e5cd.slice/crio-ec3b30c886b0112ec411b4c3e31da6b18432e577c15e34aeb44ae04d4e5b201e WatchSource:0}: 
Error finding container ec3b30c886b0112ec411b4c3e31da6b18432e577c15e34aeb44ae04d4e5b201e: Status 404 returned error can't find the container with id ec3b30c886b0112ec411b4c3e31da6b18432e577c15e34aeb44ae04d4e5b201e Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.575152 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0" path="/var/lib/kubelet/pods/0978cfe0-0a36-4a6d-9e94-a0e7c96b38b0/volumes" Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.581705 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac87bc60-d424-40d1-913b-14d363dc5b1b" path="/var/lib/kubelet/pods/ac87bc60-d424-40d1-913b-14d363dc5b1b/volumes" Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.591680 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa81ae7f-1dd2-4405-a0f4-388f3883ef7a" path="/var/lib/kubelet/pods/fa81ae7f-1dd2-4405-a0f4-388f3883ef7a/volumes" Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.946195 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerStarted","Data":"cde1a29c598e44fc76a130c81c27b928a84535d210fd0c0ff32f5348af822f7e"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.946585 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerStarted","Data":"6df652e47e24455211a28a899c6d2ecd6c29cb5e6e4cdbcb884dfa784e64eee4"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.948441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86f25b75-4164-483d-a1cd-6d6509b8e5cd","Type":"ContainerStarted","Data":"760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.948471 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-0" event={"ID":"86f25b75-4164-483d-a1cd-6d6509b8e5cd","Type":"ContainerStarted","Data":"508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.948482 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86f25b75-4164-483d-a1cd-6d6509b8e5cd","Type":"ContainerStarted","Data":"ec3b30c886b0112ec411b4c3e31da6b18432e577c15e34aeb44ae04d4e5b201e"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.950682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"56f3c039-ed21-4a16-a877-757cfff7e8b9","Type":"ContainerStarted","Data":"da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.950704 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"56f3c039-ed21-4a16-a877-757cfff7e8b9","Type":"ContainerStarted","Data":"7ec29ac2d65082c904cae7d6038c5214e411b1e7d084971aa153d8c446e43847"} Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.971822 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.971804371 podStartE2EDuration="1.971804371s" podCreationTimestamp="2026-01-26 15:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:09.963926626 +0000 UTC m=+1346.649389751" watchObservedRunningTime="2026-01-26 15:09:09.971804371 +0000 UTC m=+1346.657267476" Jan 26 15:09:09 crc kubenswrapper[4823]: I0126 15:09:09.988648 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.98861538 podStartE2EDuration="2.98861538s" podCreationTimestamp="2026-01-26 15:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:09.978973117 +0000 UTC m=+1346.664436222" watchObservedRunningTime="2026-01-26 15:09:09.98861538 +0000 UTC m=+1346.674078525" Jan 26 15:09:10 crc kubenswrapper[4823]: I0126 15:09:10.978140 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerStarted","Data":"b1888ae309c5e47266ffd77f00566216f5a43516e2f5aa50ae1ac3e332438286"} Jan 26 15:09:11 crc kubenswrapper[4823]: I0126 15:09:11.281204 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:09:11 crc kubenswrapper[4823]: I0126 15:09:11.281258 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:09:11 crc kubenswrapper[4823]: I0126 15:09:11.991402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerStarted","Data":"d8bbdb93ca6bbf8d37a10cc2bf7a18cd91f51c6150ccd80d6bea82987f7b277e"} Jan 26 15:09:13 crc kubenswrapper[4823]: I0126 15:09:13.006949 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerStarted","Data":"f2cda8971e978483477f0708eade457f12374c2903ab0fd7c1f19bf932a40f3e"} Jan 26 15:09:13 crc kubenswrapper[4823]: I0126 15:09:13.007687 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:09:13 crc kubenswrapper[4823]: I0126 15:09:13.033527 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.847195406 podStartE2EDuration="6.033501046s" podCreationTimestamp="2026-01-26 15:09:07 +0000 UTC" firstStartedPulling="2026-01-26 15:09:09.088878697 +0000 UTC m=+1345.774341802" lastFinishedPulling="2026-01-26 
15:09:12.275184337 +0000 UTC m=+1348.960647442" observedRunningTime="2026-01-26 15:09:13.02743999 +0000 UTC m=+1349.712903105" watchObservedRunningTime="2026-01-26 15:09:13.033501046 +0000 UTC m=+1349.718964151" Jan 26 15:09:13 crc kubenswrapper[4823]: I0126 15:09:13.344157 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 26 15:09:13 crc kubenswrapper[4823]: I0126 15:09:13.365660 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 15:09:14 crc kubenswrapper[4823]: I0126 15:09:14.323395 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 26 15:09:16 crc kubenswrapper[4823]: I0126 15:09:16.282280 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:09:16 crc kubenswrapper[4823]: I0126 15:09:16.282351 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:09:17 crc kubenswrapper[4823]: I0126 15:09:17.329542 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:17 crc kubenswrapper[4823]: I0126 15:09:17.329583 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:18 crc kubenswrapper[4823]: I0126 15:09:18.420412 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 
15:09:18 crc kubenswrapper[4823]: I0126 15:09:18.457626 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 15:09:18 crc kubenswrapper[4823]: I0126 15:09:18.655652 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:09:18 crc kubenswrapper[4823]: I0126 15:09:18.655719 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:09:19 crc kubenswrapper[4823]: I0126 15:09:19.142251 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 15:09:19 crc kubenswrapper[4823]: I0126 15:09:19.697592 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:19 crc kubenswrapper[4823]: I0126 15:09:19.738709 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.186:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:26 crc kubenswrapper[4823]: I0126 15:09:26.287897 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 15:09:26 crc kubenswrapper[4823]: I0126 15:09:26.291293 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 15:09:26 crc kubenswrapper[4823]: I0126 15:09:26.296155 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 15:09:26 crc kubenswrapper[4823]: I0126 15:09:26.985179 4823 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.021573 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sn6j\" (UniqueName: \"kubernetes.io/projected/e7018514-b93c-40e0-a7af-63bb1055da22-kube-api-access-4sn6j\") pod \"e7018514-b93c-40e0-a7af-63bb1055da22\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.021662 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-combined-ca-bundle\") pod \"e7018514-b93c-40e0-a7af-63bb1055da22\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.021783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-config-data\") pod \"e7018514-b93c-40e0-a7af-63bb1055da22\" (UID: \"e7018514-b93c-40e0-a7af-63bb1055da22\") " Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.029571 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7018514-b93c-40e0-a7af-63bb1055da22-kube-api-access-4sn6j" (OuterVolumeSpecName: "kube-api-access-4sn6j") pod "e7018514-b93c-40e0-a7af-63bb1055da22" (UID: "e7018514-b93c-40e0-a7af-63bb1055da22"). InnerVolumeSpecName "kube-api-access-4sn6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.050883 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-config-data" (OuterVolumeSpecName: "config-data") pod "e7018514-b93c-40e0-a7af-63bb1055da22" (UID: "e7018514-b93c-40e0-a7af-63bb1055da22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.052995 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7018514-b93c-40e0-a7af-63bb1055da22" (UID: "e7018514-b93c-40e0-a7af-63bb1055da22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.122914 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.122952 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7018514-b93c-40e0-a7af-63bb1055da22-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.122964 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sn6j\" (UniqueName: \"kubernetes.io/projected/e7018514-b93c-40e0-a7af-63bb1055da22-kube-api-access-4sn6j\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.178194 4823 generic.go:334] "Generic (PLEG): container finished" podID="e7018514-b93c-40e0-a7af-63bb1055da22" containerID="b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca" exitCode=137 Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.178250 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e7018514-b93c-40e0-a7af-63bb1055da22","Type":"ContainerDied","Data":"b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca"} Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.178312 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.178345 4823 scope.go:117] "RemoveContainer" containerID="b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.178327 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e7018514-b93c-40e0-a7af-63bb1055da22","Type":"ContainerDied","Data":"79f5f3ced3bcf3d8e6c10897f7f7604c22bc7c2cff844e2b3b5c366df2da7d96"} Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.184483 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.214097 4823 scope.go:117] "RemoveContainer" containerID="b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca" Jan 26 15:09:27 crc kubenswrapper[4823]: E0126 15:09:27.214968 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca\": container with ID starting with b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca not found: ID does not exist" containerID="b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.215026 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca"} err="failed to get container status \"b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca\": rpc error: code = NotFound desc = could not find container \"b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca\": container with ID starting with b40fd50b5aed8ec53bb518fb5ea198c9be9bd7cccc2e25bf0e05d684803ea3ca not found: ID does not exist" Jan 26 15:09:27 crc kubenswrapper[4823]: 
I0126 15:09:27.239914 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.263320 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.362473 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:09:27 crc kubenswrapper[4823]: E0126 15:09:27.363267 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7018514-b93c-40e0-a7af-63bb1055da22" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.363290 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7018514-b93c-40e0-a7af-63bb1055da22" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.363555 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7018514-b93c-40e0-a7af-63bb1055da22" containerName="nova-cell1-novncproxy-novncproxy" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.364501 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.367102 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.367398 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.373051 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.386736 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.544858 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv7gs\" (UniqueName: \"kubernetes.io/projected/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-kube-api-access-qv7gs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.545143 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.545572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc 
kubenswrapper[4823]: I0126 15:09:27.545807 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.546182 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.573250 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7018514-b93c-40e0-a7af-63bb1055da22" path="/var/lib/kubelet/pods/e7018514-b93c-40e0-a7af-63bb1055da22/volumes" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.648118 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.648189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.648226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.648295 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.648358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv7gs\" (UniqueName: \"kubernetes.io/projected/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-kube-api-access-qv7gs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.654152 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.654704 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.655006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.655119 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.667351 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv7gs\" (UniqueName: \"kubernetes.io/projected/fb7670ab-2e8e-4af0-a8a6-f8aafbdec117-kube-api-access-qv7gs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117\") " pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:27 crc kubenswrapper[4823]: I0126 15:09:27.699428 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:28 crc kubenswrapper[4823]: I0126 15:09:28.164254 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 26 15:09:28 crc kubenswrapper[4823]: I0126 15:09:28.197143 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117","Type":"ContainerStarted","Data":"a9b178040dee81a128723c36f4a15eb611b5b43412b671d5d541cad56c1d065f"} Jan 26 15:09:28 crc kubenswrapper[4823]: I0126 15:09:28.660991 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:09:28 crc kubenswrapper[4823]: I0126 15:09:28.661850 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:09:28 crc kubenswrapper[4823]: I0126 15:09:28.661893 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:09:28 crc kubenswrapper[4823]: I0126 15:09:28.667008 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.208533 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb7670ab-2e8e-4af0-a8a6-f8aafbdec117","Type":"ContainerStarted","Data":"cb5a13cece38b3d2a5a3590778792cbcfa3dc2bce1b38f0e2a42e86f228d61b9"} Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.209133 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.216936 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.247342 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" 
podStartSLOduration=2.247302461 podStartE2EDuration="2.247302461s" podCreationTimestamp="2026-01-26 15:09:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:29.227954785 +0000 UTC m=+1365.913417910" watchObservedRunningTime="2026-01-26 15:09:29.247302461 +0000 UTC m=+1365.932765606" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.470211 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-n7lkg"] Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.472284 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.526488 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-n7lkg"] Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.593154 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.593502 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.593654 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-config\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" 
(UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.593809 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfvs\" (UniqueName: \"kubernetes.io/projected/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-kube-api-access-2pfvs\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.594061 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.695266 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-config\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.695330 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfvs\" (UniqueName: \"kubernetes.io/projected/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-kube-api-access-2pfvs\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.695502 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" 
(UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.695581 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.695599 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.696508 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.697430 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-config\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.697474 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 
15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.698040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.721279 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pfvs\" (UniqueName: \"kubernetes.io/projected/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-kube-api-access-2pfvs\") pod \"dnsmasq-dns-68d4b6d797-n7lkg\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:29 crc kubenswrapper[4823]: I0126 15:09:29.837667 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:30 crc kubenswrapper[4823]: I0126 15:09:30.340178 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-n7lkg"] Jan 26 15:09:30 crc kubenswrapper[4823]: W0126 15:09:30.364017 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef112b6f_475d_4dc2_b94e_0a97c5bd5bbf.slice/crio-781fb6818c7dbbe7fd6d53be8c00a6158664289749e640f826bc50b1dd53606e WatchSource:0}: Error finding container 781fb6818c7dbbe7fd6d53be8c00a6158664289749e640f826bc50b1dd53606e: Status 404 returned error can't find the container with id 781fb6818c7dbbe7fd6d53be8c00a6158664289749e640f826bc50b1dd53606e Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.227535 4823 generic.go:334] "Generic (PLEG): container finished" podID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerID="6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c" exitCode=0 Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.227591 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" event={"ID":"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf","Type":"ContainerDied","Data":"6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c"} Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.227876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" event={"ID":"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf","Type":"ContainerStarted","Data":"781fb6818c7dbbe7fd6d53be8c00a6158664289749e640f826bc50b1dd53606e"} Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.587875 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.588227 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-central-agent" containerID="cri-o://cde1a29c598e44fc76a130c81c27b928a84535d210fd0c0ff32f5348af822f7e" gracePeriod=30 Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.588333 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="sg-core" containerID="cri-o://d8bbdb93ca6bbf8d37a10cc2bf7a18cd91f51c6150ccd80d6bea82987f7b277e" gracePeriod=30 Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.588425 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-notification-agent" containerID="cri-o://b1888ae309c5e47266ffd77f00566216f5a43516e2f5aa50ae1ac3e332438286" gracePeriod=30 Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.588489 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="proxy-httpd" 
containerID="cri-o://f2cda8971e978483477f0708eade457f12374c2903ab0fd7c1f19bf932a40f3e" gracePeriod=30 Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.602620 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.185:3000/\": EOF" Jan 26 15:09:31 crc kubenswrapper[4823]: I0126 15:09:31.761224 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.252583 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" event={"ID":"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf","Type":"ContainerStarted","Data":"07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197"} Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.253046 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255605 4823 generic.go:334] "Generic (PLEG): container finished" podID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerID="f2cda8971e978483477f0708eade457f12374c2903ab0fd7c1f19bf932a40f3e" exitCode=0 Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255650 4823 generic.go:334] "Generic (PLEG): container finished" podID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerID="d8bbdb93ca6bbf8d37a10cc2bf7a18cd91f51c6150ccd80d6bea82987f7b277e" exitCode=2 Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255661 4823 generic.go:334] "Generic (PLEG): container finished" podID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerID="b1888ae309c5e47266ffd77f00566216f5a43516e2f5aa50ae1ac3e332438286" exitCode=0 Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255673 4823 generic.go:334] "Generic (PLEG): container finished" podID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" 
containerID="cde1a29c598e44fc76a130c81c27b928a84535d210fd0c0ff32f5348af822f7e" exitCode=0 Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerDied","Data":"f2cda8971e978483477f0708eade457f12374c2903ab0fd7c1f19bf932a40f3e"} Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255905 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerDied","Data":"d8bbdb93ca6bbf8d37a10cc2bf7a18cd91f51c6150ccd80d6bea82987f7b277e"} Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerDied","Data":"b1888ae309c5e47266ffd77f00566216f5a43516e2f5aa50ae1ac3e332438286"} Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255925 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerDied","Data":"cde1a29c598e44fc76a130c81c27b928a84535d210fd0c0ff32f5348af822f7e"} Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.255908 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-log" containerID="cri-o://508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21" gracePeriod=30 Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.256106 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-api" containerID="cri-o://760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c" gracePeriod=30 Jan 26 15:09:32 crc kubenswrapper[4823]: 
I0126 15:09:32.285684 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" podStartSLOduration=3.285658216 podStartE2EDuration="3.285658216s" podCreationTimestamp="2026-01-26 15:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:32.281553104 +0000 UTC m=+1368.967016219" watchObservedRunningTime="2026-01-26 15:09:32.285658216 +0000 UTC m=+1368.971121311" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.458199 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.556945 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-sg-core-conf-yaml\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.557811 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-combined-ca-bundle\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.557889 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-run-httpd\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558046 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-ceilometer-tls-certs\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558094 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87stq\" (UniqueName: \"kubernetes.io/projected/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-kube-api-access-87stq\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558134 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-config-data\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558151 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-scripts\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558173 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-log-httpd\") pod \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\" (UID: \"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1\") " Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.558853 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.559461 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.565009 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-kube-api-access-87stq" (OuterVolumeSpecName: "kube-api-access-87stq") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "kube-api-access-87stq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.565641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-scripts" (OuterVolumeSpecName: "scripts") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.597015 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.612631 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.656560 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.661051 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.661093 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87stq\" (UniqueName: \"kubernetes.io/projected/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-kube-api-access-87stq\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.661107 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.661120 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.661132 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.661143 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.664047 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-config-data" (OuterVolumeSpecName: "config-data") pod "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" (UID: "b219be0d-2721-4686-bf1d-0e8c2e7e8fd1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.700175 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:32 crc kubenswrapper[4823]: I0126 15:09:32.762601 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.265717 4823 generic.go:334] "Generic (PLEG): container finished" podID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerID="508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21" exitCode=143 Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.265804 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86f25b75-4164-483d-a1cd-6d6509b8e5cd","Type":"ContainerDied","Data":"508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21"} Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.269873 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b219be0d-2721-4686-bf1d-0e8c2e7e8fd1","Type":"ContainerDied","Data":"6df652e47e24455211a28a899c6d2ecd6c29cb5e6e4cdbcb884dfa784e64eee4"} Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.270036 4823 scope.go:117] "RemoveContainer" containerID="f2cda8971e978483477f0708eade457f12374c2903ab0fd7c1f19bf932a40f3e" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.269906 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.292186 4823 scope.go:117] "RemoveContainer" containerID="d8bbdb93ca6bbf8d37a10cc2bf7a18cd91f51c6150ccd80d6bea82987f7b277e" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.318502 4823 scope.go:117] "RemoveContainer" containerID="b1888ae309c5e47266ffd77f00566216f5a43516e2f5aa50ae1ac3e332438286" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.321410 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.348910 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.358698 4823 scope.go:117] "RemoveContainer" containerID="cde1a29c598e44fc76a130c81c27b928a84535d210fd0c0ff32f5348af822f7e" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.358846 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:33 crc kubenswrapper[4823]: E0126 15:09:33.359165 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-notification-agent" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359185 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-notification-agent" Jan 26 15:09:33 crc kubenswrapper[4823]: E0126 15:09:33.359199 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="proxy-httpd" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359206 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="proxy-httpd" Jan 26 15:09:33 crc kubenswrapper[4823]: E0126 15:09:33.359225 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="sg-core" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359232 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="sg-core" Jan 26 15:09:33 crc kubenswrapper[4823]: E0126 15:09:33.359254 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-central-agent" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359260 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-central-agent" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359429 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-notification-agent" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359439 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="ceilometer-central-agent" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359459 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="proxy-httpd" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.359471 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" containerName="sg-core" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.360976 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.367908 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.371856 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.372057 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.373265 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475628 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-run-httpd\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475722 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-config-data\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475775 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475855 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475881 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-log-httpd\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475913 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxtss\" (UniqueName: \"kubernetes.io/projected/9e34f22e-3d29-4543-9a0f-3647fd71c16a-kube-api-access-sxtss\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.475945 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-scripts\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.571484 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b219be0d-2721-4686-bf1d-0e8c2e7e8fd1" path="/var/lib/kubelet/pods/b219be0d-2721-4686-bf1d-0e8c2e7e8fd1/volumes" Jan 26 15:09:33 crc 
kubenswrapper[4823]: I0126 15:09:33.577926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-run-httpd\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578004 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-config-data\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578049 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578079 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578102 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578120 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-log-httpd\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxtss\" (UniqueName: \"kubernetes.io/projected/9e34f22e-3d29-4543-9a0f-3647fd71c16a-kube-api-access-sxtss\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578176 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-scripts\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.578543 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-run-httpd\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.579236 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-log-httpd\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.585262 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-config-data\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.586898 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.587159 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-scripts\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.588143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.588271 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.599787 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxtss\" (UniqueName: \"kubernetes.io/projected/9e34f22e-3d29-4543-9a0f-3647fd71c16a-kube-api-access-sxtss\") pod \"ceilometer-0\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.692179 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:33 crc kubenswrapper[4823]: I0126 15:09:33.720995 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:34 crc kubenswrapper[4823]: I0126 15:09:34.260835 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:34 crc kubenswrapper[4823]: W0126 15:09:34.271742 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e34f22e_3d29_4543_9a0f_3647fd71c16a.slice/crio-4c780eb6f54a2430ced7ae03b1e1540584b8ece249257bf9bc974609d4625260 WatchSource:0}: Error finding container 4c780eb6f54a2430ced7ae03b1e1540584b8ece249257bf9bc974609d4625260: Status 404 returned error can't find the container with id 4c780eb6f54a2430ced7ae03b1e1540584b8ece249257bf9bc974609d4625260 Jan 26 15:09:34 crc kubenswrapper[4823]: I0126 15:09:34.292338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerStarted","Data":"4c780eb6f54a2430ced7ae03b1e1540584b8ece249257bf9bc974609d4625260"} Jan 26 15:09:34 crc kubenswrapper[4823]: I0126 15:09:34.508878 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:09:34 crc kubenswrapper[4823]: I0126 15:09:34.509412 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:09:35 crc kubenswrapper[4823]: I0126 15:09:35.351308 
4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerStarted","Data":"6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54"} Jan 26 15:09:35 crc kubenswrapper[4823]: I0126 15:09:35.859712 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.058811 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-config-data\") pod \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.058908 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6zjl\" (UniqueName: \"kubernetes.io/projected/86f25b75-4164-483d-a1cd-6d6509b8e5cd-kube-api-access-w6zjl\") pod \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.058952 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f25b75-4164-483d-a1cd-6d6509b8e5cd-logs\") pod \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.058979 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-combined-ca-bundle\") pod \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\" (UID: \"86f25b75-4164-483d-a1cd-6d6509b8e5cd\") " Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.059704 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/86f25b75-4164-483d-a1cd-6d6509b8e5cd-logs" (OuterVolumeSpecName: "logs") pod "86f25b75-4164-483d-a1cd-6d6509b8e5cd" (UID: "86f25b75-4164-483d-a1cd-6d6509b8e5cd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.063451 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86f25b75-4164-483d-a1cd-6d6509b8e5cd-kube-api-access-w6zjl" (OuterVolumeSpecName: "kube-api-access-w6zjl") pod "86f25b75-4164-483d-a1cd-6d6509b8e5cd" (UID: "86f25b75-4164-483d-a1cd-6d6509b8e5cd"). InnerVolumeSpecName "kube-api-access-w6zjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.086437 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-config-data" (OuterVolumeSpecName: "config-data") pod "86f25b75-4164-483d-a1cd-6d6509b8e5cd" (UID: "86f25b75-4164-483d-a1cd-6d6509b8e5cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.110636 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "86f25b75-4164-483d-a1cd-6d6509b8e5cd" (UID: "86f25b75-4164-483d-a1cd-6d6509b8e5cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.163019 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.163641 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6zjl\" (UniqueName: \"kubernetes.io/projected/86f25b75-4164-483d-a1cd-6d6509b8e5cd-kube-api-access-w6zjl\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.163822 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/86f25b75-4164-483d-a1cd-6d6509b8e5cd-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.164178 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/86f25b75-4164-483d-a1cd-6d6509b8e5cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.363009 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerStarted","Data":"ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245"} Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.365481 4823 generic.go:334] "Generic (PLEG): container finished" podID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerID="760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c" exitCode=0 Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.365518 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86f25b75-4164-483d-a1cd-6d6509b8e5cd","Type":"ContainerDied","Data":"760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c"} Jan 26 15:09:36 crc 
kubenswrapper[4823]: I0126 15:09:36.365540 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"86f25b75-4164-483d-a1cd-6d6509b8e5cd","Type":"ContainerDied","Data":"ec3b30c886b0112ec411b4c3e31da6b18432e577c15e34aeb44ae04d4e5b201e"} Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.365562 4823 scope.go:117] "RemoveContainer" containerID="760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.365740 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.402572 4823 scope.go:117] "RemoveContainer" containerID="508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.413254 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.425910 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.437391 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:36 crc kubenswrapper[4823]: E0126 15:09:36.437911 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-log" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.437929 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-log" Jan 26 15:09:36 crc kubenswrapper[4823]: E0126 15:09:36.437972 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-api" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.437980 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" 
containerName="nova-api-api" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.438211 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-api" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.438232 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" containerName="nova-api-log" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.439292 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.444956 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.495312 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.495548 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.495735 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.540448 4823 scope.go:117] "RemoveContainer" containerID="760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c" Jan 26 15:09:36 crc kubenswrapper[4823]: E0126 15:09:36.541110 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c\": container with ID starting with 760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c not found: ID does not exist" containerID="760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.541160 4823 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c"} err="failed to get container status \"760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c\": rpc error: code = NotFound desc = could not find container \"760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c\": container with ID starting with 760f9f02817b82cb47d3576e56ed507f05e999bec0ba5033056b99bc57d4be5c not found: ID does not exist" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.541191 4823 scope.go:117] "RemoveContainer" containerID="508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21" Jan 26 15:09:36 crc kubenswrapper[4823]: E0126 15:09:36.541602 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21\": container with ID starting with 508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21 not found: ID does not exist" containerID="508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.541627 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21"} err="failed to get container status \"508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21\": rpc error: code = NotFound desc = could not find container \"508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21\": container with ID starting with 508b15a37117537fcdeac4d1ffd0e082cd1c9f6401f7af77dca74a6e445f4d21 not found: ID does not exist" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.599497 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-internal-tls-certs\") 
pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.599671 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1154b81-8b33-4af5-af58-b81ee4657c1e-logs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.599759 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.599845 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.599942 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l42wg\" (UniqueName: \"kubernetes.io/projected/a1154b81-8b33-4af5-af58-b81ee4657c1e-kube-api-access-l42wg\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.599979 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-config-data\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 
15:09:36.700958 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.701042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1154b81-8b33-4af5-af58-b81ee4657c1e-logs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.701081 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.701117 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.701166 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l42wg\" (UniqueName: \"kubernetes.io/projected/a1154b81-8b33-4af5-af58-b81ee4657c1e-kube-api-access-l42wg\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.701188 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-config-data\") pod \"nova-api-0\" (UID: 
\"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.702090 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1154b81-8b33-4af5-af58-b81ee4657c1e-logs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.709510 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.710733 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-public-tls-certs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.711850 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-config-data\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.727898 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.733100 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l42wg\" (UniqueName: 
\"kubernetes.io/projected/a1154b81-8b33-4af5-af58-b81ee4657c1e-kube-api-access-l42wg\") pod \"nova-api-0\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " pod="openstack/nova-api-0" Jan 26 15:09:36 crc kubenswrapper[4823]: I0126 15:09:36.879265 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:37 crc kubenswrapper[4823]: I0126 15:09:37.325857 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:37 crc kubenswrapper[4823]: W0126 15:09:37.332739 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1154b81_8b33_4af5_af58_b81ee4657c1e.slice/crio-c2463c52636b9a52ea66d6242e2bc2abb592fa28c76c70e77a7d8ca2d30a0c66 WatchSource:0}: Error finding container c2463c52636b9a52ea66d6242e2bc2abb592fa28c76c70e77a7d8ca2d30a0c66: Status 404 returned error can't find the container with id c2463c52636b9a52ea66d6242e2bc2abb592fa28c76c70e77a7d8ca2d30a0c66 Jan 26 15:09:37 crc kubenswrapper[4823]: I0126 15:09:37.380541 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerStarted","Data":"ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7"} Jan 26 15:09:37 crc kubenswrapper[4823]: I0126 15:09:37.383500 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1154b81-8b33-4af5-af58-b81ee4657c1e","Type":"ContainerStarted","Data":"c2463c52636b9a52ea66d6242e2bc2abb592fa28c76c70e77a7d8ca2d30a0c66"} Jan 26 15:09:37 crc kubenswrapper[4823]: I0126 15:09:37.583142 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86f25b75-4164-483d-a1cd-6d6509b8e5cd" path="/var/lib/kubelet/pods/86f25b75-4164-483d-a1cd-6d6509b8e5cd/volumes" Jan 26 15:09:37 crc kubenswrapper[4823]: I0126 15:09:37.700167 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:37 crc kubenswrapper[4823]: I0126 15:09:37.725226 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.396468 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerStarted","Data":"141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6"} Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.396761 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="ceilometer-notification-agent" containerID="cri-o://ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245" gracePeriod=30 Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.396628 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="proxy-httpd" containerID="cri-o://141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6" gracePeriod=30 Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.396644 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="sg-core" containerID="cri-o://ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7" gracePeriod=30 Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.396840 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.396590 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" 
containerName="ceilometer-central-agent" containerID="cri-o://6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54" gracePeriod=30 Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.406857 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1154b81-8b33-4af5-af58-b81ee4657c1e","Type":"ContainerStarted","Data":"cffa217ccc50d370619647ad0d153f771048807f0c8af80b4c910be8fe0ca577"} Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.406907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1154b81-8b33-4af5-af58-b81ee4657c1e","Type":"ContainerStarted","Data":"e355dcc85d283945c0300a5a345e025a45c32489e4ad3e9b351abf5b857e479b"} Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.421027 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9427388749999999 podStartE2EDuration="5.42100548s" podCreationTimestamp="2026-01-26 15:09:33 +0000 UTC" firstStartedPulling="2026-01-26 15:09:34.274216527 +0000 UTC m=+1370.959679632" lastFinishedPulling="2026-01-26 15:09:37.752483132 +0000 UTC m=+1374.437946237" observedRunningTime="2026-01-26 15:09:38.416706833 +0000 UTC m=+1375.102169938" watchObservedRunningTime="2026-01-26 15:09:38.42100548 +0000 UTC m=+1375.106468585" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.424572 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.469899 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.469873709 podStartE2EDuration="2.469873709s" podCreationTimestamp="2026-01-26 15:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:38.445029263 +0000 UTC m=+1375.130492368" 
watchObservedRunningTime="2026-01-26 15:09:38.469873709 +0000 UTC m=+1375.155336804" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.667686 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-h7nxv"] Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.677170 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.683459 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.684430 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.702886 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-h7nxv"] Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.743468 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jldv\" (UniqueName: \"kubernetes.io/projected/0a0d3991-e82c-495e-bce4-2ce236179c32-kube-api-access-7jldv\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.743985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-scripts\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.744298 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.744499 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-config-data\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.845509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.845618 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-config-data\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.845669 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jldv\" (UniqueName: \"kubernetes.io/projected/0a0d3991-e82c-495e-bce4-2ce236179c32-kube-api-access-7jldv\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.845740 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-scripts\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.852427 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.854930 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-scripts\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.857909 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-config-data\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:38 crc kubenswrapper[4823]: I0126 15:09:38.869565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jldv\" (UniqueName: \"kubernetes.io/projected/0a0d3991-e82c-495e-bce4-2ce236179c32-kube-api-access-7jldv\") pod \"nova-cell1-cell-mapping-h7nxv\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.013946 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.420799 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerID="141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6" exitCode=0 Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.421174 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerID="ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7" exitCode=2 Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.421182 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerID="ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245" exitCode=0 Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.420892 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerDied","Data":"141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6"} Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.421248 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerDied","Data":"ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7"} Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.421268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerDied","Data":"ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245"} Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.497611 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-h7nxv"] Jan 26 15:09:39 crc kubenswrapper[4823]: W0126 15:09:39.504973 4823 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0d3991_e82c_495e_bce4_2ce236179c32.slice/crio-7c8d107f4b9627e65306adc6be9521ad4756dd6a8df136277617902a5872f998 WatchSource:0}: Error finding container 7c8d107f4b9627e65306adc6be9521ad4756dd6a8df136277617902a5872f998: Status 404 returned error can't find the container with id 7c8d107f4b9627e65306adc6be9521ad4756dd6a8df136277617902a5872f998 Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.839574 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.916639 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-bb6gn"] Jan 26 15:09:39 crc kubenswrapper[4823]: I0126 15:09:39.916936 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="dnsmasq-dns" containerID="cri-o://882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15" gracePeriod=10 Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.010431 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.069053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-combined-ca-bundle\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.069393 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-config-data\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.069444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-sg-core-conf-yaml\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.069464 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxtss\" (UniqueName: \"kubernetes.io/projected/9e34f22e-3d29-4543-9a0f-3647fd71c16a-kube-api-access-sxtss\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.069490 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-run-httpd\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.070178 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-log-httpd\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.070238 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-ceilometer-tls-certs\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.070277 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-scripts\") pod \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\" (UID: \"9e34f22e-3d29-4543-9a0f-3647fd71c16a\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.070436 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.070777 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.070899 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.074841 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e34f22e-3d29-4543-9a0f-3647fd71c16a-kube-api-access-sxtss" (OuterVolumeSpecName: "kube-api-access-sxtss") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "kube-api-access-sxtss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.077088 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-scripts" (OuterVolumeSpecName: "scripts") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.118800 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.154822 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.158485 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.172356 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.172421 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.172434 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxtss\" (UniqueName: \"kubernetes.io/projected/9e34f22e-3d29-4543-9a0f-3647fd71c16a-kube-api-access-sxtss\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.172449 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9e34f22e-3d29-4543-9a0f-3647fd71c16a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.172461 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.172472 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.195403 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-config-data" (OuterVolumeSpecName: "config-data") pod "9e34f22e-3d29-4543-9a0f-3647fd71c16a" (UID: "9e34f22e-3d29-4543-9a0f-3647fd71c16a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.274639 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e34f22e-3d29-4543-9a0f-3647fd71c16a-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.363303 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.377009 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-nb\") pod \"eae4b870-d305-4b7f-8b9c-a30366d0123c\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.377099 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-sb\") pod \"eae4b870-d305-4b7f-8b9c-a30366d0123c\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.377149 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-config\") pod \"eae4b870-d305-4b7f-8b9c-a30366d0123c\" (UID: 
\"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.377188 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-dns-svc\") pod \"eae4b870-d305-4b7f-8b9c-a30366d0123c\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.377324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j58wn\" (UniqueName: \"kubernetes.io/projected/eae4b870-d305-4b7f-8b9c-a30366d0123c-kube-api-access-j58wn\") pod \"eae4b870-d305-4b7f-8b9c-a30366d0123c\" (UID: \"eae4b870-d305-4b7f-8b9c-a30366d0123c\") " Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.386015 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae4b870-d305-4b7f-8b9c-a30366d0123c-kube-api-access-j58wn" (OuterVolumeSpecName: "kube-api-access-j58wn") pod "eae4b870-d305-4b7f-8b9c-a30366d0123c" (UID: "eae4b870-d305-4b7f-8b9c-a30366d0123c"). InnerVolumeSpecName "kube-api-access-j58wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.436204 4823 generic.go:334] "Generic (PLEG): container finished" podID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerID="882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15" exitCode=0 Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.436260 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.436271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" event={"ID":"eae4b870-d305-4b7f-8b9c-a30366d0123c","Type":"ContainerDied","Data":"882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15"} Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.437234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" event={"ID":"eae4b870-d305-4b7f-8b9c-a30366d0123c","Type":"ContainerDied","Data":"6009b840b9d698daa02d442e16b89588a328d5629b4bdd83442e2ed2ec12c927"} Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.437254 4823 scope.go:117] "RemoveContainer" containerID="882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.464906 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h7nxv" event={"ID":"0a0d3991-e82c-495e-bce4-2ce236179c32","Type":"ContainerStarted","Data":"0d2e3eeeaf3ba4095f8b73ff504f0ba331ee1f87ec82c9a9f6a6dbc3c9679d30"} Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.464975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h7nxv" event={"ID":"0a0d3991-e82c-495e-bce4-2ce236179c32","Type":"ContainerStarted","Data":"7c8d107f4b9627e65306adc6be9521ad4756dd6a8df136277617902a5872f998"} Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.465080 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-config" (OuterVolumeSpecName: "config") pod "eae4b870-d305-4b7f-8b9c-a30366d0123c" (UID: "eae4b870-d305-4b7f-8b9c-a30366d0123c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.477251 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerID="6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54" exitCode=0 Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.477322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerDied","Data":"6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54"} Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.477379 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9e34f22e-3d29-4543-9a0f-3647fd71c16a","Type":"ContainerDied","Data":"4c780eb6f54a2430ced7ae03b1e1540584b8ece249257bf9bc974609d4625260"} Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.477471 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.482589 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eae4b870-d305-4b7f-8b9c-a30366d0123c" (UID: "eae4b870-d305-4b7f-8b9c-a30366d0123c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.494703 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-h7nxv" podStartSLOduration=2.494680565 podStartE2EDuration="2.494680565s" podCreationTimestamp="2026-01-26 15:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:40.490586864 +0000 UTC m=+1377.176049969" watchObservedRunningTime="2026-01-26 15:09:40.494680565 +0000 UTC m=+1377.180143670" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.502219 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eae4b870-d305-4b7f-8b9c-a30366d0123c" (UID: "eae4b870-d305-4b7f-8b9c-a30366d0123c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.509600 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.509647 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j58wn\" (UniqueName: \"kubernetes.io/projected/eae4b870-d305-4b7f-8b9c-a30366d0123c-kube-api-access-j58wn\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.509661 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.509672 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.537323 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eae4b870-d305-4b7f-8b9c-a30366d0123c" (UID: "eae4b870-d305-4b7f-8b9c-a30366d0123c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.604929 4823 scope.go:117] "RemoveContainer" containerID="79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.611589 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eae4b870-d305-4b7f-8b9c-a30366d0123c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.626457 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.649951 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.659777 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.660213 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="proxy-httpd" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660235 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="proxy-httpd" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.660249 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" 
containerName="ceilometer-notification-agent" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660256 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="ceilometer-notification-agent" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.660266 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="dnsmasq-dns" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660272 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="dnsmasq-dns" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.660286 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="sg-core" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660292 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="sg-core" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.660304 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="init" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660310 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="init" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.660324 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="ceilometer-central-agent" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660330 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="ceilometer-central-agent" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660535 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" 
containerName="ceilometer-notification-agent" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660547 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="proxy-httpd" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660554 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="dnsmasq-dns" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660563 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="sg-core" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.660577 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" containerName="ceilometer-central-agent" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.661069 4823 scope.go:117] "RemoveContainer" containerID="882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.661787 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15\": container with ID starting with 882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15 not found: ID does not exist" containerID="882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.661842 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15"} err="failed to get container status \"882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15\": rpc error: code = NotFound desc = could not find container \"882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15\": container with ID starting with 
882c26be343479bca7f03fa35c391022e9317cd5b8385a5a14ec56f967969a15 not found: ID does not exist" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.661872 4823 scope.go:117] "RemoveContainer" containerID="79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.662272 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.662585 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9\": container with ID starting with 79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9 not found: ID does not exist" containerID="79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.662704 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9"} err="failed to get container status \"79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9\": rpc error: code = NotFound desc = could not find container \"79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9\": container with ID starting with 79e53c37b022bcfd4817f80ea92742c6d4235b833f3faf5f7aac56cabd6564e9 not found: ID does not exist" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.662798 4823 scope.go:117] "RemoveContainer" containerID="141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.666641 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.666811 4823 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"ceilometer-scripts" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.666972 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.678082 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.695298 4823 scope.go:117] "RemoveContainer" containerID="ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.716509 4823 scope.go:117] "RemoveContainer" containerID="ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.739643 4823 scope.go:117] "RemoveContainer" containerID="6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.769611 4823 scope.go:117] "RemoveContainer" containerID="141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.770258 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6\": container with ID starting with 141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6 not found: ID does not exist" containerID="141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.770292 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6"} err="failed to get container status \"141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6\": rpc error: code = NotFound desc = could not find container 
\"141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6\": container with ID starting with 141bb680903cd1eee42fd3c4c087e37d49b0b669f153f6a3bc2d22e02403d0a6 not found: ID does not exist" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.770318 4823 scope.go:117] "RemoveContainer" containerID="ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.770688 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7\": container with ID starting with ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7 not found: ID does not exist" containerID="ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.770708 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7"} err="failed to get container status \"ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7\": rpc error: code = NotFound desc = could not find container \"ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7\": container with ID starting with ac92eed8656b5cdfcfdd4c24e17fb9915e888b21a3d15bcc56423d3d707b81f7 not found: ID does not exist" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.770892 4823 scope.go:117] "RemoveContainer" containerID="ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.771824 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245\": container with ID starting with ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245 not found: ID does not exist" 
containerID="ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.771848 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245"} err="failed to get container status \"ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245\": rpc error: code = NotFound desc = could not find container \"ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245\": container with ID starting with ca9e2ed77e244dbbe266daa430ef368d449751561fde400b165c5844c11ed245 not found: ID does not exist" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.771868 4823 scope.go:117] "RemoveContainer" containerID="6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54" Jan 26 15:09:40 crc kubenswrapper[4823]: E0126 15:09:40.772280 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54\": container with ID starting with 6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54 not found: ID does not exist" containerID="6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.772323 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54"} err="failed to get container status \"6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54\": rpc error: code = NotFound desc = could not find container \"6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54\": container with ID starting with 6e785e66fa19e51ba72e81abf190c0e074d79c86b31f6af10b879c9cadf03c54 not found: ID does not exist" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.781518 4823 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-bb6gn"] Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.791990 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-bb6gn"] Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.816793 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-log-httpd\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.816889 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-run-httpd\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.816914 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwspt\" (UniqueName: \"kubernetes.io/projected/6c49ef12-8848-4054-af2f-18d5f98522c8-kube-api-access-gwspt\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.816934 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.816963 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.817257 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.817415 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-config-data\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.817469 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-scripts\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919661 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-log-httpd\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-run-httpd\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 
crc kubenswrapper[4823]: I0126 15:09:40.919792 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwspt\" (UniqueName: \"kubernetes.io/projected/6c49ef12-8848-4054-af2f-18d5f98522c8-kube-api-access-gwspt\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919813 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919837 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919893 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-config-data\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.919943 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-scripts\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.920478 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-run-httpd\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.920591 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-log-httpd\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.927693 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-config-data\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.929031 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-scripts\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.929576 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.930503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.939644 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.941886 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwspt\" (UniqueName: \"kubernetes.io/projected/6c49ef12-8848-4054-af2f-18d5f98522c8-kube-api-access-gwspt\") pod \"ceilometer-0\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " pod="openstack/ceilometer-0" Jan 26 15:09:40 crc kubenswrapper[4823]: I0126 15:09:40.998052 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:09:41 crc kubenswrapper[4823]: I0126 15:09:41.492764 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:09:41 crc kubenswrapper[4823]: I0126 15:09:41.574303 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e34f22e-3d29-4543-9a0f-3647fd71c16a" path="/var/lib/kubelet/pods/9e34f22e-3d29-4543-9a0f-3647fd71c16a/volumes" Jan 26 15:09:41 crc kubenswrapper[4823]: I0126 15:09:41.575254 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" path="/var/lib/kubelet/pods/eae4b870-d305-4b7f-8b9c-a30366d0123c/volumes" Jan 26 15:09:42 crc kubenswrapper[4823]: I0126 15:09:42.504842 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerStarted","Data":"7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552"} Jan 26 15:09:42 crc kubenswrapper[4823]: I0126 15:09:42.505773 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerStarted","Data":"cc6f3af804fa08666baf3a61ebfcba13097009f2c60fc380c4ccdbc5a779a914"} Jan 26 15:09:43 crc kubenswrapper[4823]: I0126 15:09:43.534219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerStarted","Data":"ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a"} Jan 26 15:09:44 crc kubenswrapper[4823]: I0126 15:09:44.545094 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerStarted","Data":"4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665"} Jan 26 15:09:45 crc kubenswrapper[4823]: I0126 15:09:45.244728 4823 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/dnsmasq-dns-8b8cf6657-bb6gn" podUID="eae4b870-d305-4b7f-8b9c-a30366d0123c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.178:5353: i/o timeout" Jan 26 15:09:45 crc kubenswrapper[4823]: I0126 15:09:45.556824 4823 generic.go:334] "Generic (PLEG): container finished" podID="0a0d3991-e82c-495e-bce4-2ce236179c32" containerID="0d2e3eeeaf3ba4095f8b73ff504f0ba331ee1f87ec82c9a9f6a6dbc3c9679d30" exitCode=0 Jan 26 15:09:45 crc kubenswrapper[4823]: I0126 15:09:45.556948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h7nxv" event={"ID":"0a0d3991-e82c-495e-bce4-2ce236179c32","Type":"ContainerDied","Data":"0d2e3eeeaf3ba4095f8b73ff504f0ba331ee1f87ec82c9a9f6a6dbc3c9679d30"} Jan 26 15:09:46 crc kubenswrapper[4823]: I0126 15:09:46.573675 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerStarted","Data":"01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9"} Jan 26 15:09:46 crc kubenswrapper[4823]: I0126 15:09:46.599318 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.98323407 podStartE2EDuration="6.599299974s" podCreationTimestamp="2026-01-26 15:09:40 +0000 UTC" firstStartedPulling="2026-01-26 15:09:41.491871459 +0000 UTC m=+1378.177334564" lastFinishedPulling="2026-01-26 15:09:46.107937363 +0000 UTC m=+1382.793400468" observedRunningTime="2026-01-26 15:09:46.597414323 +0000 UTC m=+1383.282877438" watchObservedRunningTime="2026-01-26 15:09:46.599299974 +0000 UTC m=+1383.284763079" Jan 26 15:09:46 crc kubenswrapper[4823]: I0126 15:09:46.879610 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:09:46 crc kubenswrapper[4823]: I0126 15:09:46.880075 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Jan 26 15:09:46 crc kubenswrapper[4823]: I0126 15:09:46.992092 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.144731 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-config-data\") pod \"0a0d3991-e82c-495e-bce4-2ce236179c32\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.144914 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-scripts\") pod \"0a0d3991-e82c-495e-bce4-2ce236179c32\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.144977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-combined-ca-bundle\") pod \"0a0d3991-e82c-495e-bce4-2ce236179c32\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.145013 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jldv\" (UniqueName: \"kubernetes.io/projected/0a0d3991-e82c-495e-bce4-2ce236179c32-kube-api-access-7jldv\") pod \"0a0d3991-e82c-495e-bce4-2ce236179c32\" (UID: \"0a0d3991-e82c-495e-bce4-2ce236179c32\") " Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.152682 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-scripts" (OuterVolumeSpecName: "scripts") pod "0a0d3991-e82c-495e-bce4-2ce236179c32" (UID: "0a0d3991-e82c-495e-bce4-2ce236179c32"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.156972 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0d3991-e82c-495e-bce4-2ce236179c32-kube-api-access-7jldv" (OuterVolumeSpecName: "kube-api-access-7jldv") pod "0a0d3991-e82c-495e-bce4-2ce236179c32" (UID: "0a0d3991-e82c-495e-bce4-2ce236179c32"). InnerVolumeSpecName "kube-api-access-7jldv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.175435 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0d3991-e82c-495e-bce4-2ce236179c32" (UID: "0a0d3991-e82c-495e-bce4-2ce236179c32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.177562 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-config-data" (OuterVolumeSpecName: "config-data") pod "0a0d3991-e82c-495e-bce4-2ce236179c32" (UID: "0a0d3991-e82c-495e-bce4-2ce236179c32"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.247240 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jldv\" (UniqueName: \"kubernetes.io/projected/0a0d3991-e82c-495e-bce4-2ce236179c32-kube-api-access-7jldv\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.247282 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.247292 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.247301 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d3991-e82c-495e-bce4-2ce236179c32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.590419 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-h7nxv" event={"ID":"0a0d3991-e82c-495e-bce4-2ce236179c32","Type":"ContainerDied","Data":"7c8d107f4b9627e65306adc6be9521ad4756dd6a8df136277617902a5872f998"} Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.590476 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c8d107f4b9627e65306adc6be9521ad4756dd6a8df136277617902a5872f998" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.590632 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-h7nxv" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.590770 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.809655 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.818806 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.819345 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="56f3c039-ed21-4a16-a877-757cfff7e8b9" containerName="nova-scheduler-scheduler" containerID="cri-o://da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88" gracePeriod=30 Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.856575 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.856858 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-log" containerID="cri-o://44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59" gracePeriod=30 Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.857243 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-metadata" containerID="cri-o://de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2" gracePeriod=30 Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.894630 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-api" probeResult="failure" 
output="Get \"https://10.217.0.190:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:47 crc kubenswrapper[4823]: I0126 15:09:47.894977 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.190:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:48 crc kubenswrapper[4823]: E0126 15:09:48.362399 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:09:48 crc kubenswrapper[4823]: E0126 15:09:48.371626 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:09:48 crc kubenswrapper[4823]: E0126 15:09:48.373385 4823 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 26 15:09:48 crc kubenswrapper[4823]: E0126 15:09:48.373432 4823 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="56f3c039-ed21-4a16-a877-757cfff7e8b9" 
containerName="nova-scheduler-scheduler" Jan 26 15:09:48 crc kubenswrapper[4823]: I0126 15:09:48.601881 4823 generic.go:334] "Generic (PLEG): container finished" podID="193cf951-14a6-4175-95a9-e832702f5576" containerID="44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59" exitCode=143 Jan 26 15:09:48 crc kubenswrapper[4823]: I0126 15:09:48.602011 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"193cf951-14a6-4175-95a9-e832702f5576","Type":"ContainerDied","Data":"44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59"} Jan 26 15:09:48 crc kubenswrapper[4823]: I0126 15:09:48.602513 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-api" containerID="cri-o://cffa217ccc50d370619647ad0d153f771048807f0c8af80b4c910be8fe0ca577" gracePeriod=30 Jan 26 15:09:48 crc kubenswrapper[4823]: I0126 15:09:48.602469 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-log" containerID="cri-o://e355dcc85d283945c0300a5a345e025a45c32489e4ad3e9b351abf5b857e479b" gracePeriod=30 Jan 26 15:09:49 crc kubenswrapper[4823]: I0126 15:09:49.615131 4823 generic.go:334] "Generic (PLEG): container finished" podID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerID="e355dcc85d283945c0300a5a345e025a45c32489e4ad3e9b351abf5b857e479b" exitCode=143 Jan 26 15:09:49 crc kubenswrapper[4823]: I0126 15:09:49.615192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1154b81-8b33-4af5-af58-b81ee4657c1e","Type":"ContainerDied","Data":"e355dcc85d283945c0300a5a345e025a45c32489e4ad3e9b351abf5b857e479b"} Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.470986 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.636850 4823 generic.go:334] "Generic (PLEG): container finished" podID="193cf951-14a6-4175-95a9-e832702f5576" containerID="de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2" exitCode=0 Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.636926 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.636926 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"193cf951-14a6-4175-95a9-e832702f5576","Type":"ContainerDied","Data":"de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2"} Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.636977 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"193cf951-14a6-4175-95a9-e832702f5576","Type":"ContainerDied","Data":"01304225a42d5141174c91d5baa13e85d3d47c864a48815aa86bfccba76bf1b5"} Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.637005 4823 scope.go:117] "RemoveContainer" containerID="de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.640055 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-combined-ca-bundle\") pod \"193cf951-14a6-4175-95a9-e832702f5576\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.640114 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/193cf951-14a6-4175-95a9-e832702f5576-logs\") pod \"193cf951-14a6-4175-95a9-e832702f5576\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " Jan 26 15:09:51 crc 
kubenswrapper[4823]: I0126 15:09:51.640141 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvgbn\" (UniqueName: \"kubernetes.io/projected/193cf951-14a6-4175-95a9-e832702f5576-kube-api-access-xvgbn\") pod \"193cf951-14a6-4175-95a9-e832702f5576\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.640160 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-nova-metadata-tls-certs\") pod \"193cf951-14a6-4175-95a9-e832702f5576\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.640204 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-config-data\") pod \"193cf951-14a6-4175-95a9-e832702f5576\" (UID: \"193cf951-14a6-4175-95a9-e832702f5576\") " Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.641734 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/193cf951-14a6-4175-95a9-e832702f5576-logs" (OuterVolumeSpecName: "logs") pod "193cf951-14a6-4175-95a9-e832702f5576" (UID: "193cf951-14a6-4175-95a9-e832702f5576"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.647530 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/193cf951-14a6-4175-95a9-e832702f5576-kube-api-access-xvgbn" (OuterVolumeSpecName: "kube-api-access-xvgbn") pod "193cf951-14a6-4175-95a9-e832702f5576" (UID: "193cf951-14a6-4175-95a9-e832702f5576"). InnerVolumeSpecName "kube-api-access-xvgbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.679774 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "193cf951-14a6-4175-95a9-e832702f5576" (UID: "193cf951-14a6-4175-95a9-e832702f5576"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.688680 4823 scope.go:117] "RemoveContainer" containerID="44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.708055 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-config-data" (OuterVolumeSpecName: "config-data") pod "193cf951-14a6-4175-95a9-e832702f5576" (UID: "193cf951-14a6-4175-95a9-e832702f5576"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.723553 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.723860 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/193cf951-14a6-4175-95a9-e832702f5576-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.723940 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvgbn\" (UniqueName: \"kubernetes.io/projected/193cf951-14a6-4175-95a9-e832702f5576-kube-api-access-xvgbn\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.724013 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.729452 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "193cf951-14a6-4175-95a9-e832702f5576" (UID: "193cf951-14a6-4175-95a9-e832702f5576"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.757086 4823 scope.go:117] "RemoveContainer" containerID="de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2" Jan 26 15:09:51 crc kubenswrapper[4823]: E0126 15:09:51.757879 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2\": container with ID starting with de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2 not found: ID does not exist" containerID="de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.757938 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2"} err="failed to get container status \"de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2\": rpc error: code = NotFound desc = could not find container \"de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2\": container with ID starting with de4901e13db65e999b07305bf30de2de86e4fe7d99a217f6036c63ce18fd87f2 not found: ID does not exist" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.757968 4823 scope.go:117] "RemoveContainer" containerID="44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59" Jan 26 15:09:51 crc kubenswrapper[4823]: E0126 15:09:51.758580 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59\": container with ID starting with 44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59 not found: ID does not exist" containerID="44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.758704 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59"} err="failed to get container status \"44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59\": rpc error: code = NotFound desc = could not find container \"44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59\": container with ID starting with 44e8600050e88c20e667fecaa6cd9a6d6a6d210bc24d107da78f7cf74474ab59 not found: ID does not exist" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.825861 4823 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/193cf951-14a6-4175-95a9-e832702f5576-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.973797 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.981961 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.998582 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:51 crc kubenswrapper[4823]: E0126 15:09:51.999588 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-log" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.999734 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-log" Jan 26 15:09:51 crc kubenswrapper[4823]: E0126 15:09:51.999821 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0d3991-e82c-495e-bce4-2ce236179c32" containerName="nova-manage" Jan 26 15:09:51 crc kubenswrapper[4823]: I0126 15:09:51.999928 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0a0d3991-e82c-495e-bce4-2ce236179c32" containerName="nova-manage" Jan 26 15:09:52 crc kubenswrapper[4823]: E0126 15:09:52.000026 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-metadata" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.000090 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-metadata" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.020199 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-log" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.020976 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-metadata" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.021098 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0d3991-e82c-495e-bce4-2ce236179c32" containerName="nova-manage" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.022680 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.022899 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.026076 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.026853 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.133635 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948e5a03-94e3-47a1-a589-0738ba9fec3d-logs\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.134078 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.134108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpl4g\" (UniqueName: \"kubernetes.io/projected/948e5a03-94e3-47a1-a589-0738ba9fec3d-kube-api-access-dpl4g\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.135692 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-config-data\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.135836 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.237535 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.237645 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948e5a03-94e3-47a1-a589-0738ba9fec3d-logs\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.237677 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.237707 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpl4g\" (UniqueName: \"kubernetes.io/projected/948e5a03-94e3-47a1-a589-0738ba9fec3d-kube-api-access-dpl4g\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.237775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-config-data\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.238573 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/948e5a03-94e3-47a1-a589-0738ba9fec3d-logs\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.241590 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-config-data\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.241790 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.242624 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/948e5a03-94e3-47a1-a589-0738ba9fec3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.256701 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpl4g\" (UniqueName: \"kubernetes.io/projected/948e5a03-94e3-47a1-a589-0738ba9fec3d-kube-api-access-dpl4g\") pod \"nova-metadata-0\" (UID: \"948e5a03-94e3-47a1-a589-0738ba9fec3d\") " pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc 
kubenswrapper[4823]: I0126 15:09:52.359357 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.653803 4823 generic.go:334] "Generic (PLEG): container finished" podID="56f3c039-ed21-4a16-a877-757cfff7e8b9" containerID="da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88" exitCode=0 Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.653975 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"56f3c039-ed21-4a16-a877-757cfff7e8b9","Type":"ContainerDied","Data":"da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88"} Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.757757 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.852116 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-config-data\") pod \"56f3c039-ed21-4a16-a877-757cfff7e8b9\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.852387 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-combined-ca-bundle\") pod \"56f3c039-ed21-4a16-a877-757cfff7e8b9\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.852459 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4nr9\" (UniqueName: \"kubernetes.io/projected/56f3c039-ed21-4a16-a877-757cfff7e8b9-kube-api-access-t4nr9\") pod \"56f3c039-ed21-4a16-a877-757cfff7e8b9\" (UID: \"56f3c039-ed21-4a16-a877-757cfff7e8b9\") " Jan 26 15:09:52 crc kubenswrapper[4823]: 
I0126 15:09:52.859083 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f3c039-ed21-4a16-a877-757cfff7e8b9-kube-api-access-t4nr9" (OuterVolumeSpecName: "kube-api-access-t4nr9") pod "56f3c039-ed21-4a16-a877-757cfff7e8b9" (UID: "56f3c039-ed21-4a16-a877-757cfff7e8b9"). InnerVolumeSpecName "kube-api-access-t4nr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.884268 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "56f3c039-ed21-4a16-a877-757cfff7e8b9" (UID: "56f3c039-ed21-4a16-a877-757cfff7e8b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.893129 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-config-data" (OuterVolumeSpecName: "config-data") pod "56f3c039-ed21-4a16-a877-757cfff7e8b9" (UID: "56f3c039-ed21-4a16-a877-757cfff7e8b9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.920890 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.955140 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.955179 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56f3c039-ed21-4a16-a877-757cfff7e8b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:52 crc kubenswrapper[4823]: I0126 15:09:52.955207 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4nr9\" (UniqueName: \"kubernetes.io/projected/56f3c039-ed21-4a16-a877-757cfff7e8b9-kube-api-access-t4nr9\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.575780 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="193cf951-14a6-4175-95a9-e832702f5576" path="/var/lib/kubelet/pods/193cf951-14a6-4175-95a9-e832702f5576/volumes" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.667980 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948e5a03-94e3-47a1-a589-0738ba9fec3d","Type":"ContainerStarted","Data":"b5d7a205a96031416b18a2a5aca863a55d138c9d96a146dcb93bb09c2af48cc5"} Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.669117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"948e5a03-94e3-47a1-a589-0738ba9fec3d","Type":"ContainerStarted","Data":"2e54edbdbaa0974a7573066a7ec30e504e1fd34a34e54b2818506ae8a88df900"} Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.669190 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"948e5a03-94e3-47a1-a589-0738ba9fec3d","Type":"ContainerStarted","Data":"eeaf2b3d7b5c5cb58932e62166f9ef6406dcdd4c6f0d6602c445056edc410a47"} Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.677116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"56f3c039-ed21-4a16-a877-757cfff7e8b9","Type":"ContainerDied","Data":"7ec29ac2d65082c904cae7d6038c5214e411b1e7d084971aa153d8c446e43847"} Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.677209 4823 scope.go:117] "RemoveContainer" containerID="da1cefe1163f457fd2e50604f0809f5046e7e32a4eaa9a58b4e6bcb63c371c88" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.677218 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.681175 4823 generic.go:334] "Generic (PLEG): container finished" podID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerID="cffa217ccc50d370619647ad0d153f771048807f0c8af80b4c910be8fe0ca577" exitCode=0 Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.681213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1154b81-8b33-4af5-af58-b81ee4657c1e","Type":"ContainerDied","Data":"cffa217ccc50d370619647ad0d153f771048807f0c8af80b4c910be8fe0ca577"} Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.681234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a1154b81-8b33-4af5-af58-b81ee4657c1e","Type":"ContainerDied","Data":"c2463c52636b9a52ea66d6242e2bc2abb592fa28c76c70e77a7d8ca2d30a0c66"} Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.681250 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2463c52636b9a52ea66d6242e2bc2abb592fa28c76c70e77a7d8ca2d30a0c66" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.704965 4823 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.704933921 podStartE2EDuration="2.704933921s" podCreationTimestamp="2026-01-26 15:09:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:53.689412989 +0000 UTC m=+1390.374876094" watchObservedRunningTime="2026-01-26 15:09:53.704933921 +0000 UTC m=+1390.390397026" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.728448 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.749696 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.761812 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.769740 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.769907 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-public-tls-certs\") pod \"a1154b81-8b33-4af5-af58-b81ee4657c1e\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.769956 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l42wg\" (UniqueName: \"kubernetes.io/projected/a1154b81-8b33-4af5-af58-b81ee4657c1e-kube-api-access-l42wg\") pod \"a1154b81-8b33-4af5-af58-b81ee4657c1e\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770019 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-internal-tls-certs\") pod \"a1154b81-8b33-4af5-af58-b81ee4657c1e\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770102 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-combined-ca-bundle\") pod \"a1154b81-8b33-4af5-af58-b81ee4657c1e\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " Jan 26 15:09:53 crc kubenswrapper[4823]: E0126 15:09:53.770130 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f3c039-ed21-4a16-a877-757cfff7e8b9" containerName="nova-scheduler-scheduler" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770136 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-config-data\") pod \"a1154b81-8b33-4af5-af58-b81ee4657c1e\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770144 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f3c039-ed21-4a16-a877-757cfff7e8b9" containerName="nova-scheduler-scheduler" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770165 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1154b81-8b33-4af5-af58-b81ee4657c1e-logs\") pod \"a1154b81-8b33-4af5-af58-b81ee4657c1e\" (UID: \"a1154b81-8b33-4af5-af58-b81ee4657c1e\") " Jan 26 15:09:53 crc kubenswrapper[4823]: E0126 15:09:53.770177 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-log" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770191 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-log" Jan 26 15:09:53 crc kubenswrapper[4823]: E0126 15:09:53.770208 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-api" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770215 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-api" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770399 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-api" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770415 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" containerName="nova-api-log" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.770427 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f3c039-ed21-4a16-a877-757cfff7e8b9" containerName="nova-scheduler-scheduler" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.771030 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.771690 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1154b81-8b33-4af5-af58-b81ee4657c1e-logs" (OuterVolumeSpecName: "logs") pod "a1154b81-8b33-4af5-af58-b81ee4657c1e" (UID: "a1154b81-8b33-4af5-af58-b81ee4657c1e"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.782083 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1154b81-8b33-4af5-af58-b81ee4657c1e-kube-api-access-l42wg" (OuterVolumeSpecName: "kube-api-access-l42wg") pod "a1154b81-8b33-4af5-af58-b81ee4657c1e" (UID: "a1154b81-8b33-4af5-af58-b81ee4657c1e"). InnerVolumeSpecName "kube-api-access-l42wg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.782357 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.811711 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.837436 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-config-data" (OuterVolumeSpecName: "config-data") pod "a1154b81-8b33-4af5-af58-b81ee4657c1e" (UID: "a1154b81-8b33-4af5-af58-b81ee4657c1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.838577 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1154b81-8b33-4af5-af58-b81ee4657c1e" (UID: "a1154b81-8b33-4af5-af58-b81ee4657c1e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.860342 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a1154b81-8b33-4af5-af58-b81ee4657c1e" (UID: "a1154b81-8b33-4af5-af58-b81ee4657c1e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.863206 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a1154b81-8b33-4af5-af58-b81ee4657c1e" (UID: "a1154b81-8b33-4af5-af58-b81ee4657c1e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.873198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532c0da-749a-4f0c-8157-b79e71b715ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.873608 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgqrh\" (UniqueName: \"kubernetes.io/projected/d532c0da-749a-4f0c-8157-b79e71b715ac-kube-api-access-rgqrh\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.873937 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532c0da-749a-4f0c-8157-b79e71b715ac-config-data\") 
pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.874144 4823 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.874192 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l42wg\" (UniqueName: \"kubernetes.io/projected/a1154b81-8b33-4af5-af58-b81ee4657c1e-kube-api-access-l42wg\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.874210 4823 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.874223 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.874235 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1154b81-8b33-4af5-af58-b81ee4657c1e-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.874248 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1154b81-8b33-4af5-af58-b81ee4657c1e-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.976270 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532c0da-749a-4f0c-8157-b79e71b715ac-config-data\") pod \"nova-scheduler-0\" (UID: 
\"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.976356 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532c0da-749a-4f0c-8157-b79e71b715ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.976507 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgqrh\" (UniqueName: \"kubernetes.io/projected/d532c0da-749a-4f0c-8157-b79e71b715ac-kube-api-access-rgqrh\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.979936 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d532c0da-749a-4f0c-8157-b79e71b715ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:53 crc kubenswrapper[4823]: I0126 15:09:53.980158 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d532c0da-749a-4f0c-8157-b79e71b715ac-config-data\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.006295 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgqrh\" (UniqueName: \"kubernetes.io/projected/d532c0da-749a-4f0c-8157-b79e71b715ac-kube-api-access-rgqrh\") pod \"nova-scheduler-0\" (UID: \"d532c0da-749a-4f0c-8157-b79e71b715ac\") " pod="openstack/nova-scheduler-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.137396 4823 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.678826 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.693091 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d532c0da-749a-4f0c-8157-b79e71b715ac","Type":"ContainerStarted","Data":"e9a658edee7ca2e83c706855dcbd379a7ccedd6274f428c3b004437e050e1e95"} Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.693159 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.809598 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.826048 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.842417 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.843987 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.847029 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.847300 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.847543 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.848914 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.995268 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca95455-aa3f-4fa1-a292-d3745005d671-logs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.995494 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.995532 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.995579 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-config-data\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.995614 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:54 crc kubenswrapper[4823]: I0126 15:09:54.995718 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czr58\" (UniqueName: \"kubernetes.io/projected/8ca95455-aa3f-4fa1-a292-d3745005d671-kube-api-access-czr58\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.097210 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.097276 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.097326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-config-data\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 
15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.097346 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.097415 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czr58\" (UniqueName: \"kubernetes.io/projected/8ca95455-aa3f-4fa1-a292-d3745005d671-kube-api-access-czr58\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.097453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca95455-aa3f-4fa1-a292-d3745005d671-logs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.098131 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ca95455-aa3f-4fa1-a292-d3745005d671-logs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.104777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.105431 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-config-data\") pod \"nova-api-0\" (UID: 
\"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.110069 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.110135 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ca95455-aa3f-4fa1-a292-d3745005d671-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.123298 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czr58\" (UniqueName: \"kubernetes.io/projected/8ca95455-aa3f-4fa1-a292-d3745005d671-kube-api-access-czr58\") pod \"nova-api-0\" (UID: \"8ca95455-aa3f-4fa1-a292-d3745005d671\") " pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.164790 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.572891 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f3c039-ed21-4a16-a877-757cfff7e8b9" path="/var/lib/kubelet/pods/56f3c039-ed21-4a16-a877-757cfff7e8b9/volumes" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.575425 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1154b81-8b33-4af5-af58-b81ee4657c1e" path="/var/lib/kubelet/pods/a1154b81-8b33-4af5-af58-b81ee4657c1e/volumes" Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.637175 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 26 15:09:55 crc kubenswrapper[4823]: W0126 15:09:55.647171 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ca95455_aa3f_4fa1_a292_d3745005d671.slice/crio-1a8a87f8b141890d91a9283b1a244c3e08a600b52cf4921efe8a23cadfa78e35 WatchSource:0}: Error finding container 1a8a87f8b141890d91a9283b1a244c3e08a600b52cf4921efe8a23cadfa78e35: Status 404 returned error can't find the container with id 1a8a87f8b141890d91a9283b1a244c3e08a600b52cf4921efe8a23cadfa78e35 Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.728432 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d532c0da-749a-4f0c-8157-b79e71b715ac","Type":"ContainerStarted","Data":"408f9fbce815f2532fb923d60176ab2fc013e0fdc6b00c3fa9ed1f5dae751796"} Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.732849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ca95455-aa3f-4fa1-a292-d3745005d671","Type":"ContainerStarted","Data":"1a8a87f8b141890d91a9283b1a244c3e08a600b52cf4921efe8a23cadfa78e35"} Jan 26 15:09:55 crc kubenswrapper[4823]: I0126 15:09:55.762795 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-scheduler-0" podStartSLOduration=2.762771125 podStartE2EDuration="2.762771125s" podCreationTimestamp="2026-01-26 15:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:55.747439568 +0000 UTC m=+1392.432902683" watchObservedRunningTime="2026-01-26 15:09:55.762771125 +0000 UTC m=+1392.448234230" Jan 26 15:09:56 crc kubenswrapper[4823]: I0126 15:09:56.282010 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:56 crc kubenswrapper[4823]: I0126 15:09:56.282936 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="193cf951-14a6-4175-95a9-e832702f5576" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.183:8775/\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 15:09:56 crc kubenswrapper[4823]: I0126 15:09:56.751889 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ca95455-aa3f-4fa1-a292-d3745005d671","Type":"ContainerStarted","Data":"e299b18ae89f3588240033cf93c0fe2db59384e20399a58995f900fb1d32f6d5"} Jan 26 15:09:56 crc kubenswrapper[4823]: I0126 15:09:56.752314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ca95455-aa3f-4fa1-a292-d3745005d671","Type":"ContainerStarted","Data":"2194f7ae74dc330a4df88187745c700acb8091588f0d4f51461b92c0d8e7fba1"} Jan 26 15:09:56 crc kubenswrapper[4823]: I0126 15:09:56.788079 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=2.788057173 podStartE2EDuration="2.788057173s" podCreationTimestamp="2026-01-26 15:09:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:09:56.773660612 +0000 UTC m=+1393.459123707" watchObservedRunningTime="2026-01-26 15:09:56.788057173 +0000 UTC m=+1393.473520278" Jan 26 15:09:57 crc kubenswrapper[4823]: I0126 15:09:57.360057 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:09:57 crc kubenswrapper[4823]: I0126 15:09:57.360123 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 26 15:09:59 crc kubenswrapper[4823]: I0126 15:09:59.137990 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 26 15:10:02 crc kubenswrapper[4823]: I0126 15:10:02.360314 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:10:02 crc kubenswrapper[4823]: I0126 15:10:02.360805 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 26 15:10:03 crc kubenswrapper[4823]: I0126 15:10:03.374535 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="948e5a03-94e3-47a1-a589-0738ba9fec3d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 15:10:03 crc kubenswrapper[4823]: I0126 15:10:03.374551 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="948e5a03-94e3-47a1-a589-0738ba9fec3d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.193:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:10:04 crc 
kubenswrapper[4823]: I0126 15:10:04.137872 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 26 15:10:04 crc kubenswrapper[4823]: I0126 15:10:04.166779 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 26 15:10:04 crc kubenswrapper[4823]: I0126 15:10:04.508665 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:10:04 crc kubenswrapper[4823]: I0126 15:10:04.508755 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:10:04 crc kubenswrapper[4823]: I0126 15:10:04.869860 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 26 15:10:05 crc kubenswrapper[4823]: I0126 15:10:05.165742 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:10:05 crc kubenswrapper[4823]: I0126 15:10:05.166266 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 26 15:10:06 crc kubenswrapper[4823]: I0126 15:10:06.178539 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ca95455-aa3f-4fa1-a292-d3745005d671" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:10:06 crc kubenswrapper[4823]: I0126 15:10:06.178590 
4823 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ca95455-aa3f-4fa1-a292-d3745005d671" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.195:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 15:10:11 crc kubenswrapper[4823]: I0126 15:10:11.009673 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 15:10:12 crc kubenswrapper[4823]: I0126 15:10:12.376168 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 15:10:12 crc kubenswrapper[4823]: I0126 15:10:12.378220 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 26 15:10:12 crc kubenswrapper[4823]: I0126 15:10:12.390062 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 15:10:12 crc kubenswrapper[4823]: I0126 15:10:12.925354 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 26 15:10:15 crc kubenswrapper[4823]: I0126 15:10:15.174696 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:10:15 crc kubenswrapper[4823]: I0126 15:10:15.175487 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:10:15 crc kubenswrapper[4823]: I0126 15:10:15.175983 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 26 15:10:15 crc kubenswrapper[4823]: I0126 15:10:15.183078 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:10:15 crc kubenswrapper[4823]: I0126 15:10:15.942430 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 26 15:10:15 crc kubenswrapper[4823]: I0126 
15:10:15.949095 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 26 15:10:24 crc kubenswrapper[4823]: I0126 15:10:24.632751 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:10:25 crc kubenswrapper[4823]: I0126 15:10:25.723795 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:10:29 crc kubenswrapper[4823]: I0126 15:10:29.345719 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="rabbitmq" containerID="cri-o://ed59d9bf4c7e8e5a1a8e23c753100b50e0bc2d0528d6eae294a01d96973d87b8" gracePeriod=604796 Jan 26 15:10:30 crc kubenswrapper[4823]: I0126 15:10:30.098189 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="rabbitmq" containerID="cri-o://9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df" gracePeriod=604796 Jan 26 15:10:34 crc kubenswrapper[4823]: I0126 15:10:34.507949 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:10:34 crc kubenswrapper[4823]: I0126 15:10:34.508718 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:10:34 crc kubenswrapper[4823]: I0126 15:10:34.508824 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:10:34 crc kubenswrapper[4823]: I0126 15:10:34.509874 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5873fe7ad32e2369de7d83b599dba09a2b10db32679ec89fa6711c86f67ecbb2"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:10:34 crc kubenswrapper[4823]: I0126 15:10:34.509967 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://5873fe7ad32e2369de7d83b599dba09a2b10db32679ec89fa6711c86f67ecbb2" gracePeriod=600 Jan 26 15:10:35 crc kubenswrapper[4823]: I0126 15:10:35.144066 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="5873fe7ad32e2369de7d83b599dba09a2b10db32679ec89fa6711c86f67ecbb2" exitCode=0 Jan 26 15:10:35 crc kubenswrapper[4823]: I0126 15:10:35.144132 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"5873fe7ad32e2369de7d83b599dba09a2b10db32679ec89fa6711c86f67ecbb2"} Jan 26 15:10:35 crc kubenswrapper[4823]: I0126 15:10:35.144768 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123"} Jan 26 15:10:35 crc kubenswrapper[4823]: I0126 15:10:35.144794 4823 scope.go:117] "RemoveContainer" 
containerID="6349657ac17d7db3f38b64f373c6e824084e4cf157cbb0ce8765094b3f648c48" Jan 26 15:10:35 crc kubenswrapper[4823]: I0126 15:10:35.675441 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 26 15:10:36 crc kubenswrapper[4823]: I0126 15:10:36.079718 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.174768 4823 generic.go:334] "Generic (PLEG): container finished" podID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerID="ed59d9bf4c7e8e5a1a8e23c753100b50e0bc2d0528d6eae294a01d96973d87b8" exitCode=0 Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.174894 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43c52fb-3ef3-4d3e-984d-642a9bc09469","Type":"ContainerDied","Data":"ed59d9bf4c7e8e5a1a8e23c753100b50e0bc2d0528d6eae294a01d96973d87b8"} Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.175435 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"c43c52fb-3ef3-4d3e-984d-642a9bc09469","Type":"ContainerDied","Data":"15a8b2464e7d1cd54b8221ecac2a2bbc9c768f330508648a9bf5df254f5739fb"} Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.175504 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a8b2464e7d1cd54b8221ecac2a2bbc9c768f330508648a9bf5df254f5739fb" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.210919 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43c52fb-3ef3-4d3e-984d-642a9bc09469-pod-info\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbngj\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-kube-api-access-jbngj\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376474 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-plugins-conf\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376517 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-plugins\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376639 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-server-conf\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376740 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376791 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43c52fb-3ef3-4d3e-984d-642a9bc09469-erlang-cookie-secret\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376825 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-confd\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.377322 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.377621 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.376944 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-erlang-cookie\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.378550 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-config-data\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.378638 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-tls\") pod \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\" (UID: \"c43c52fb-3ef3-4d3e-984d-642a9bc09469\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.379608 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.379784 4823 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.379805 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.379819 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.390331 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.390452 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-kube-api-access-jbngj" (OuterVolumeSpecName: "kube-api-access-jbngj") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "kube-api-access-jbngj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.390331 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c43c52fb-3ef3-4d3e-984d-642a9bc09469-pod-info" (OuterVolumeSpecName: "pod-info") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.394523 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c43c52fb-3ef3-4d3e-984d-642a9bc09469-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.395504 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.423503 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-config-data" (OuterVolumeSpecName: "config-data") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.469373 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-server-conf" (OuterVolumeSpecName: "server-conf") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481286 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481341 4823 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c43c52fb-3ef3-4d3e-984d-642a9bc09469-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481353 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481434 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481444 4823 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c43c52fb-3ef3-4d3e-984d-642a9bc09469-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481454 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbngj\" (UniqueName: 
\"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-kube-api-access-jbngj\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.481466 4823 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c43c52fb-3ef3-4d3e-984d-642a9bc09469-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.501026 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c43c52fb-3ef3-4d3e-984d-642a9bc09469" (UID: "c43c52fb-3ef3-4d3e-984d-642a9bc09469"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.506037 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.585101 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c43c52fb-3ef3-4d3e-984d-642a9bc09469-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.585129 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.635990 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788044 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-server-conf\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788707 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-plugins\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788751 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-erlang-cookie\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788858 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z85zf\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-kube-api-access-z85zf\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a82c17e1-38ac-4448-b3ff-b18df77c521b-pod-info\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788929 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-tls\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.788985 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-confd\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.789041 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-plugins-conf\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.789086 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a82c17e1-38ac-4448-b3ff-b18df77c521b-erlang-cookie-secret\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.789127 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-config-data\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.789151 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"a82c17e1-38ac-4448-b3ff-b18df77c521b\" (UID: \"a82c17e1-38ac-4448-b3ff-b18df77c521b\") " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 
15:10:37.789757 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.790081 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.789915 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.794319 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "persistence") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "local-storage12-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.794431 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-kube-api-access-z85zf" (OuterVolumeSpecName: "kube-api-access-z85zf") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "kube-api-access-z85zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.795432 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a82c17e1-38ac-4448-b3ff-b18df77c521b-pod-info" (OuterVolumeSpecName: "pod-info") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.795456 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a82c17e1-38ac-4448-b3ff-b18df77c521b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.796002 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.834120 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-config-data" (OuterVolumeSpecName: "config-data") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.846674 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-server-conf" (OuterVolumeSpecName: "server-conf") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.887993 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a82c17e1-38ac-4448-b3ff-b18df77c521b" (UID: "a82c17e1-38ac-4448-b3ff-b18df77c521b"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891347 4823 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-server-conf\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891460 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891479 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891493 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z85zf\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-kube-api-access-z85zf\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891507 4823 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a82c17e1-38ac-4448-b3ff-b18df77c521b-pod-info\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891518 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891528 4823 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a82c17e1-38ac-4448-b3ff-b18df77c521b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 
15:10:37.891539 4823 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891551 4823 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a82c17e1-38ac-4448-b3ff-b18df77c521b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891562 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a82c17e1-38ac-4448-b3ff-b18df77c521b-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.891607 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.913450 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 26 15:10:37 crc kubenswrapper[4823]: I0126 15:10:37.993219 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.187451 4823 generic.go:334] "Generic (PLEG): container finished" podID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerID="9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df" exitCode=0 Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.187536 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.187557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a82c17e1-38ac-4448-b3ff-b18df77c521b","Type":"ContainerDied","Data":"9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df"} Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.187594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a82c17e1-38ac-4448-b3ff-b18df77c521b","Type":"ContainerDied","Data":"9d9a06e749ccfa1a2ed4c1d154370b2c31c7b75b39d5f07a17e57287595998b1"} Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.187615 4823 scope.go:117] "RemoveContainer" containerID="9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.187536 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.245642 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.268653 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.280472 4823 scope.go:117] "RemoveContainer" containerID="4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.289022 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.309936 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.324780 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 
15:10:38 crc kubenswrapper[4823]: E0126 15:10:38.325342 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="rabbitmq" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.325800 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="rabbitmq" Jan 26 15:10:38 crc kubenswrapper[4823]: E0126 15:10:38.325839 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="setup-container" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.325848 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="setup-container" Jan 26 15:10:38 crc kubenswrapper[4823]: E0126 15:10:38.325871 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="setup-container" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.325879 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="setup-container" Jan 26 15:10:38 crc kubenswrapper[4823]: E0126 15:10:38.325890 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="rabbitmq" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.325898 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="rabbitmq" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.326119 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" containerName="rabbitmq" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.326141 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" containerName="rabbitmq" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 
15:10:38.327699 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.330954 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.331660 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.331760 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.331831 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.331904 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.331967 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.332304 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-q2xzp" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.338182 4823 scope.go:117] "RemoveContainer" containerID="9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.338345 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: E0126 15:10:38.339279 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df\": container with ID starting with 9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df not 
found: ID does not exist" containerID="9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.339464 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df"} err="failed to get container status \"9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df\": rpc error: code = NotFound desc = could not find container \"9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df\": container with ID starting with 9c496220e6585f5066e3532f6c98f1c743727cab805bc0fbd1a86bf8de4e30df not found: ID does not exist" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.339734 4823 scope.go:117] "RemoveContainer" containerID="4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3" Jan 26 15:10:38 crc kubenswrapper[4823]: E0126 15:10:38.340077 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3\": container with ID starting with 4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3 not found: ID does not exist" containerID="4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.340194 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3"} err="failed to get container status \"4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3\": rpc error: code = NotFound desc = could not find container \"4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3\": container with ID starting with 4b2034ce41d61eb076d22a82c04cc9cf553fcbec011d783b6bb86deedba49bf3 not found: ID does not exist" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.341089 
4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.346706 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.346969 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.347116 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.347239 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.348181 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.348469 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rjxvp" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.350926 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.358955 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.376579 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.399945 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " 
pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.399996 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400026 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400062 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400086 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400101 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwgmb\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-kube-api-access-qwgmb\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") 
" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400117 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400139 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400161 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c89e518a-a264-4196-97f5-4614a0b2d59d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400180 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400195 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-config-data\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" 
Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400216 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c89e518a-a264-4196-97f5-4614a0b2d59d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400243 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400257 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400277 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400295 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcfk\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-kube-api-access-nbcfk\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: 
I0126 15:10:38.400310 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400324 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400347 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400460 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400487 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.400765 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502104 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502167 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcfk\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-kube-api-access-nbcfk\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502247 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502286 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502317 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502353 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 
26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502402 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502456 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502500 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwgmb\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-kube-api-access-qwgmb\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502513 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502537 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502560 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c89e518a-a264-4196-97f5-4614a0b2d59d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502593 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-config-data\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502613 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/c89e518a-a264-4196-97f5-4614a0b2d59d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.502642 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.503427 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.503550 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.503693 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.503775 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.505643 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.505702 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.505725 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.505825 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.506401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 
15:10:38.506690 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-config-data\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.507108 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.507901 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c89e518a-a264-4196-97f5-4614a0b2d59d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.508765 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.509444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.509580 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.512128 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.514863 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c89e518a-a264-4196-97f5-4614a0b2d59d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.515060 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.515348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.523323 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c89e518a-a264-4196-97f5-4614a0b2d59d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" 
Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.524512 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcfk\" (UniqueName: \"kubernetes.io/projected/a38fdbff-2641-41d8-9b9b-ad6fe2fd9147-kube-api-access-nbcfk\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.527469 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwgmb\" (UniqueName: \"kubernetes.io/projected/c89e518a-a264-4196-97f5-4614a0b2d59d-kube-api-access-qwgmb\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.543556 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c89e518a-a264-4196-97f5-4614a0b2d59d\") " pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.551908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147\") " pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.671638 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.684705 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 26 15:10:38 crc kubenswrapper[4823]: I0126 15:10:38.948639 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 26 15:10:38 crc kubenswrapper[4823]: W0126 15:10:38.955513 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda38fdbff_2641_41d8_9b9b_ad6fe2fd9147.slice/crio-ded2cf2cf66c52030abbac5a51822d0f40f39133bef36d3f5ec0d830985ce879 WatchSource:0}: Error finding container ded2cf2cf66c52030abbac5a51822d0f40f39133bef36d3f5ec0d830985ce879: Status 404 returned error can't find the container with id ded2cf2cf66c52030abbac5a51822d0f40f39133bef36d3f5ec0d830985ce879 Jan 26 15:10:39 crc kubenswrapper[4823]: I0126 15:10:39.200251 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 26 15:10:39 crc kubenswrapper[4823]: I0126 15:10:39.202259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147","Type":"ContainerStarted","Data":"ded2cf2cf66c52030abbac5a51822d0f40f39133bef36d3f5ec0d830985ce879"} Jan 26 15:10:39 crc kubenswrapper[4823]: W0126 15:10:39.203809 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc89e518a_a264_4196_97f5_4614a0b2d59d.slice/crio-f0ad9f8f1ba454e1cc34cbde13ef51898d87cfe70365a8ffdbbdae154b49d851 WatchSource:0}: Error finding container f0ad9f8f1ba454e1cc34cbde13ef51898d87cfe70365a8ffdbbdae154b49d851: Status 404 returned error can't find the container with id f0ad9f8f1ba454e1cc34cbde13ef51898d87cfe70365a8ffdbbdae154b49d851 Jan 26 15:10:39 crc kubenswrapper[4823]: I0126 15:10:39.573141 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a82c17e1-38ac-4448-b3ff-b18df77c521b" 
path="/var/lib/kubelet/pods/a82c17e1-38ac-4448-b3ff-b18df77c521b/volumes" Jan 26 15:10:39 crc kubenswrapper[4823]: I0126 15:10:39.574279 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c43c52fb-3ef3-4d3e-984d-642a9bc09469" path="/var/lib/kubelet/pods/c43c52fb-3ef3-4d3e-984d-642a9bc09469/volumes" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.217832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c89e518a-a264-4196-97f5-4614a0b2d59d","Type":"ContainerStarted","Data":"f0ad9f8f1ba454e1cc34cbde13ef51898d87cfe70365a8ffdbbdae154b49d851"} Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.324879 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-xvkc9"] Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.326952 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.335417 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.375785 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-xvkc9"] Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.441642 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-config\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.441697 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfkv2\" (UniqueName: \"kubernetes.io/projected/7461b78c-bd91-4815-be93-bfdc8afce17e-kube-api-access-gfkv2\") pod 
\"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.442078 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.442214 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.442652 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.442713 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.545105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.545602 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.545694 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.545862 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.546010 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-config\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.546032 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfkv2\" (UniqueName: 
\"kubernetes.io/projected/7461b78c-bd91-4815-be93-bfdc8afce17e-kube-api-access-gfkv2\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.546503 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.546727 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-dns-svc\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.547096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.547141 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.547600 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-config\") pod 
\"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.574258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfkv2\" (UniqueName: \"kubernetes.io/projected/7461b78c-bd91-4815-be93-bfdc8afce17e-kube-api-access-gfkv2\") pod \"dnsmasq-dns-578b8d767c-xvkc9\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:40 crc kubenswrapper[4823]: I0126 15:10:40.680412 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:41 crc kubenswrapper[4823]: I0126 15:10:41.229295 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147","Type":"ContainerStarted","Data":"19aca4619745ff716daa4e79d3a0bdfdc119b35b63d9cb55e22f166e0f2aaac5"} Jan 26 15:10:41 crc kubenswrapper[4823]: I0126 15:10:41.231197 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c89e518a-a264-4196-97f5-4614a0b2d59d","Type":"ContainerStarted","Data":"65cae0ad6ac02aa7c5ce47d622ffe7292514378d8db79a569374dcc551b80285"} Jan 26 15:10:41 crc kubenswrapper[4823]: I0126 15:10:41.396415 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-xvkc9"] Jan 26 15:10:42 crc kubenswrapper[4823]: I0126 15:10:42.243554 4823 generic.go:334] "Generic (PLEG): container finished" podID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerID="74132f63e408d54484f6886c35bc5bbbd8efa382e89d8115a7bdfb5b8bdd880c" exitCode=0 Jan 26 15:10:42 crc kubenswrapper[4823]: I0126 15:10:42.243697 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" 
event={"ID":"7461b78c-bd91-4815-be93-bfdc8afce17e","Type":"ContainerDied","Data":"74132f63e408d54484f6886c35bc5bbbd8efa382e89d8115a7bdfb5b8bdd880c"} Jan 26 15:10:42 crc kubenswrapper[4823]: I0126 15:10:42.244074 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" event={"ID":"7461b78c-bd91-4815-be93-bfdc8afce17e","Type":"ContainerStarted","Data":"b16ca8cc140a391ba9fa2cdca705acdfdd2b68fa4f893dd4016c7a6a4e5f71c9"} Jan 26 15:10:43 crc kubenswrapper[4823]: I0126 15:10:43.254389 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" event={"ID":"7461b78c-bd91-4815-be93-bfdc8afce17e","Type":"ContainerStarted","Data":"16a6294794752e0dbda4359a881458fc341901292da9c2c5336b9d73b7bb162a"} Jan 26 15:10:43 crc kubenswrapper[4823]: I0126 15:10:43.255684 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:43 crc kubenswrapper[4823]: I0126 15:10:43.282869 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" podStartSLOduration=3.282847669 podStartE2EDuration="3.282847669s" podCreationTimestamp="2026-01-26 15:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:10:43.275724806 +0000 UTC m=+1439.961187911" watchObservedRunningTime="2026-01-26 15:10:43.282847669 +0000 UTC m=+1439.968310774" Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.683667 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.757467 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-n7lkg"] Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.757759 4823 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerName="dnsmasq-dns" containerID="cri-o://07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197" gracePeriod=10 Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.911784 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zkxk8"] Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.914250 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.939247 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zkxk8"] Jan 26 15:10:50 crc kubenswrapper[4823]: I0126 15:10:50.999549 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:50.999961 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.000057 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc 
kubenswrapper[4823]: I0126 15:10:51.000089 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5sqv\" (UniqueName: \"kubernetes.io/projected/974996f7-bcd0-44da-8861-8d44792fe2b1-kube-api-access-z5sqv\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.000198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-config\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.000226 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.102329 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.102421 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc 
kubenswrapper[4823]: I0126 15:10:51.102440 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5sqv\" (UniqueName: \"kubernetes.io/projected/974996f7-bcd0-44da-8861-8d44792fe2b1-kube-api-access-z5sqv\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.102484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-config\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.102500 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.102556 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.111826 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-sb\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.113221 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-openstack-edpm-ipam\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.113250 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-nb\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.113407 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-dns-svc\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.113914 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-config\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.141677 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5sqv\" (UniqueName: \"kubernetes.io/projected/974996f7-bcd0-44da-8861-8d44792fe2b1-kube-api-access-z5sqv\") pod \"dnsmasq-dns-fbc59fbb7-zkxk8\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.288009 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.310182 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.377681 4823 generic.go:334] "Generic (PLEG): container finished" podID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerID="07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197" exitCode=0 Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.377733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" event={"ID":"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf","Type":"ContainerDied","Data":"07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197"} Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.377769 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" event={"ID":"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf","Type":"ContainerDied","Data":"781fb6818c7dbbe7fd6d53be8c00a6158664289749e640f826bc50b1dd53606e"} Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.377792 4823 scope.go:117] "RemoveContainer" containerID="07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.378000 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-n7lkg" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.407187 4823 scope.go:117] "RemoveContainer" containerID="6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.408010 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-dns-svc\") pod \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.408162 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfvs\" (UniqueName: \"kubernetes.io/projected/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-kube-api-access-2pfvs\") pod \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.408296 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-config\") pod \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.408339 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-nb\") pod \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\" (UID: \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.408783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-sb\") pod \"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\" (UID: 
\"ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf\") " Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.414242 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-kube-api-access-2pfvs" (OuterVolumeSpecName: "kube-api-access-2pfvs") pod "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" (UID: "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf"). InnerVolumeSpecName "kube-api-access-2pfvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.458399 4823 scope.go:117] "RemoveContainer" containerID="07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197" Jan 26 15:10:51 crc kubenswrapper[4823]: E0126 15:10:51.459139 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197\": container with ID starting with 07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197 not found: ID does not exist" containerID="07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.459249 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197"} err="failed to get container status \"07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197\": rpc error: code = NotFound desc = could not find container \"07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197\": container with ID starting with 07d83c3b9a52af5ad1fc0d63ea0d83357a5fdfd024aad27afe9b5a803c155197 not found: ID does not exist" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.459317 4823 scope.go:117] "RemoveContainer" containerID="6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c" Jan 26 15:10:51 crc kubenswrapper[4823]: E0126 15:10:51.461076 4823 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c\": container with ID starting with 6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c not found: ID does not exist" containerID="6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.461124 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c"} err="failed to get container status \"6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c\": rpc error: code = NotFound desc = could not find container \"6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c\": container with ID starting with 6320f3075b4a5e8c1fe393acfb978b3fd63f38303e6e83b82ed9c9e6e4c4c24c not found: ID does not exist" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.474975 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" (UID: "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.483916 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" (UID: "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.485278 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-config" (OuterVolumeSpecName: "config") pod "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" (UID: "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.485647 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" (UID: "ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.510960 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.510991 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfvs\" (UniqueName: \"kubernetes.io/projected/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-kube-api-access-2pfvs\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.511003 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.511011 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 
15:10:51.511018 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.703139 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-n7lkg"] Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.714175 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-n7lkg"] Jan 26 15:10:51 crc kubenswrapper[4823]: I0126 15:10:51.799103 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zkxk8"] Jan 26 15:10:52 crc kubenswrapper[4823]: I0126 15:10:52.419128 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" event={"ID":"974996f7-bcd0-44da-8861-8d44792fe2b1","Type":"ContainerStarted","Data":"654c35ebc025d54b2553acab4c92a8dd74bb0e9bb456add9e16843e352646b7e"} Jan 26 15:10:52 crc kubenswrapper[4823]: I0126 15:10:52.419666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" event={"ID":"974996f7-bcd0-44da-8861-8d44792fe2b1","Type":"ContainerStarted","Data":"7b1e0dbe92208bdf8cb0c3e3d4f97897fab86a372ee303bdf2f49ac464a07f3c"} Jan 26 15:10:53 crc kubenswrapper[4823]: I0126 15:10:53.430188 4823 generic.go:334] "Generic (PLEG): container finished" podID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerID="654c35ebc025d54b2553acab4c92a8dd74bb0e9bb456add9e16843e352646b7e" exitCode=0 Jan 26 15:10:53 crc kubenswrapper[4823]: I0126 15:10:53.430341 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" event={"ID":"974996f7-bcd0-44da-8861-8d44792fe2b1","Type":"ContainerDied","Data":"654c35ebc025d54b2553acab4c92a8dd74bb0e9bb456add9e16843e352646b7e"} Jan 26 15:10:53 crc kubenswrapper[4823]: I0126 15:10:53.572723 4823 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" path="/var/lib/kubelet/pods/ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf/volumes" Jan 26 15:10:54 crc kubenswrapper[4823]: I0126 15:10:54.445105 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" event={"ID":"974996f7-bcd0-44da-8861-8d44792fe2b1","Type":"ContainerStarted","Data":"49d280773794424c372a89d2ec9985e3ec5154a1d1096fb9fe1af5d65f97c189"} Jan 26 15:10:54 crc kubenswrapper[4823]: I0126 15:10:54.446439 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:10:54 crc kubenswrapper[4823]: I0126 15:10:54.466248 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" podStartSLOduration=4.466205272 podStartE2EDuration="4.466205272s" podCreationTimestamp="2026-01-26 15:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:10:54.463844099 +0000 UTC m=+1451.149307284" watchObservedRunningTime="2026-01-26 15:10:54.466205272 +0000 UTC m=+1451.151668367" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.242150 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9x6ck"] Jan 26 15:10:59 crc kubenswrapper[4823]: E0126 15:10:59.246609 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerName="dnsmasq-dns" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.246720 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerName="dnsmasq-dns" Jan 26 15:10:59 crc kubenswrapper[4823]: E0126 15:10:59.251562 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerName="init" Jan 26 15:10:59 crc 
kubenswrapper[4823]: I0126 15:10:59.251855 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerName="init" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.252308 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef112b6f-475d-4dc2-b94e-0a97c5bd5bbf" containerName="dnsmasq-dns" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.253879 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.261167 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x6ck"] Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.449091 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-utilities\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.449209 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc2mf\" (UniqueName: \"kubernetes.io/projected/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-kube-api-access-tc2mf\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.449305 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-catalog-content\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 
15:10:59.550582 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-utilities\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.550690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc2mf\" (UniqueName: \"kubernetes.io/projected/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-kube-api-access-tc2mf\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.550747 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-catalog-content\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.551150 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-utilities\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.551217 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-catalog-content\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.580297 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tc2mf\" (UniqueName: \"kubernetes.io/projected/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-kube-api-access-tc2mf\") pod \"redhat-operators-9x6ck\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") " pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:10:59 crc kubenswrapper[4823]: I0126 15:10:59.583046 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x6ck" Jan 26 15:11:00 crc kubenswrapper[4823]: I0126 15:11:00.048682 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9x6ck"] Jan 26 15:11:00 crc kubenswrapper[4823]: I0126 15:11:00.511659 4823 generic.go:334] "Generic (PLEG): container finished" podID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerID="94a349296872230af3e1fe48b0917766ec080ad7fd31113750a3c70a2aa2a25c" exitCode=0 Jan 26 15:11:00 crc kubenswrapper[4823]: I0126 15:11:00.511719 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerDied","Data":"94a349296872230af3e1fe48b0917766ec080ad7fd31113750a3c70a2aa2a25c"} Jan 26 15:11:00 crc kubenswrapper[4823]: I0126 15:11:00.511774 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerStarted","Data":"dc2bbdfbfd131539dbbe42b8ade478117081863c73c2ef5481c3a2fdc1913c42"} Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.289811 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.373215 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-xvkc9"] Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.374005 4823 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerName="dnsmasq-dns" containerID="cri-o://16a6294794752e0dbda4359a881458fc341901292da9c2c5336b9d73b7bb162a" gracePeriod=10 Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.527441 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerStarted","Data":"914c0f967c2bf658547f33c1359c58eb10622d67a913a7616eea4df04f399e46"} Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.530994 4823 generic.go:334] "Generic (PLEG): container finished" podID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerID="16a6294794752e0dbda4359a881458fc341901292da9c2c5336b9d73b7bb162a" exitCode=0 Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.531117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" event={"ID":"7461b78c-bd91-4815-be93-bfdc8afce17e","Type":"ContainerDied","Data":"16a6294794752e0dbda4359a881458fc341901292da9c2c5336b9d73b7bb162a"} Jan 26 15:11:01 crc kubenswrapper[4823]: I0126 15:11:01.910503 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.112547 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-sb\") pod \"7461b78c-bd91-4815-be93-bfdc8afce17e\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.114293 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-dns-svc\") pod \"7461b78c-bd91-4815-be93-bfdc8afce17e\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.114671 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-openstack-edpm-ipam\") pod \"7461b78c-bd91-4815-be93-bfdc8afce17e\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.114846 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-config\") pod \"7461b78c-bd91-4815-be93-bfdc8afce17e\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.114978 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-nb\") pod \"7461b78c-bd91-4815-be93-bfdc8afce17e\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.115106 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfkv2\" 
(UniqueName: \"kubernetes.io/projected/7461b78c-bd91-4815-be93-bfdc8afce17e-kube-api-access-gfkv2\") pod \"7461b78c-bd91-4815-be93-bfdc8afce17e\" (UID: \"7461b78c-bd91-4815-be93-bfdc8afce17e\") " Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.121115 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7461b78c-bd91-4815-be93-bfdc8afce17e-kube-api-access-gfkv2" (OuterVolumeSpecName: "kube-api-access-gfkv2") pod "7461b78c-bd91-4815-be93-bfdc8afce17e" (UID: "7461b78c-bd91-4815-be93-bfdc8afce17e"). InnerVolumeSpecName "kube-api-access-gfkv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.175916 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7461b78c-bd91-4815-be93-bfdc8afce17e" (UID: "7461b78c-bd91-4815-be93-bfdc8afce17e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.177888 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7461b78c-bd91-4815-be93-bfdc8afce17e" (UID: "7461b78c-bd91-4815-be93-bfdc8afce17e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.180480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7461b78c-bd91-4815-be93-bfdc8afce17e" (UID: "7461b78c-bd91-4815-be93-bfdc8afce17e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.182867 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-config" (OuterVolumeSpecName: "config") pod "7461b78c-bd91-4815-be93-bfdc8afce17e" (UID: "7461b78c-bd91-4815-be93-bfdc8afce17e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.189871 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "7461b78c-bd91-4815-be93-bfdc8afce17e" (UID: "7461b78c-bd91-4815-be93-bfdc8afce17e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.217337 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.217536 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.217556 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.217568 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:02 
crc kubenswrapper[4823]: I0126 15:11:02.217581 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7461b78c-bd91-4815-be93-bfdc8afce17e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.217593 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfkv2\" (UniqueName: \"kubernetes.io/projected/7461b78c-bd91-4815-be93-bfdc8afce17e-kube-api-access-gfkv2\") on node \"crc\" DevicePath \"\"" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.542163 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9" event={"ID":"7461b78c-bd91-4815-be93-bfdc8afce17e","Type":"ContainerDied","Data":"b16ca8cc140a391ba9fa2cdca705acdfdd2b68fa4f893dd4016c7a6a4e5f71c9"} Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.542229 4823 scope.go:117] "RemoveContainer" containerID="16a6294794752e0dbda4359a881458fc341901292da9c2c5336b9d73b7bb162a" Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.542195 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-xvkc9"
Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.562464 4823 scope.go:117] "RemoveContainer" containerID="74132f63e408d54484f6886c35bc5bbbd8efa382e89d8115a7bdfb5b8bdd880c"
Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.584704 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-xvkc9"]
Jan 26 15:11:02 crc kubenswrapper[4823]: I0126 15:11:02.601376 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-xvkc9"]
Jan 26 15:11:03 crc kubenswrapper[4823]: I0126 15:11:03.554162 4823 generic.go:334] "Generic (PLEG): container finished" podID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerID="914c0f967c2bf658547f33c1359c58eb10622d67a913a7616eea4df04f399e46" exitCode=0
Jan 26 15:11:03 crc kubenswrapper[4823]: I0126 15:11:03.554220 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerDied","Data":"914c0f967c2bf658547f33c1359c58eb10622d67a913a7616eea4df04f399e46"}
Jan 26 15:11:03 crc kubenswrapper[4823]: I0126 15:11:03.575549 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" path="/var/lib/kubelet/pods/7461b78c-bd91-4815-be93-bfdc8afce17e/volumes"
Jan 26 15:11:06 crc kubenswrapper[4823]: I0126 15:11:06.609031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerStarted","Data":"b7c05fb0c8f85d373c43cd1ffea2f9c3f75580b24068d676f325a08415309b6f"}
Jan 26 15:11:06 crc kubenswrapper[4823]: I0126 15:11:06.641975 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9x6ck" podStartSLOduration=2.341656401 podStartE2EDuration="7.64194773s" podCreationTimestamp="2026-01-26 15:10:59 +0000 UTC" firstStartedPulling="2026-01-26 15:11:00.513857952 +0000 UTC m=+1457.199321067" lastFinishedPulling="2026-01-26 15:11:05.814149291 +0000 UTC m=+1462.499612396" observedRunningTime="2026-01-26 15:11:06.63312065 +0000 UTC m=+1463.318583775" watchObservedRunningTime="2026-01-26 15:11:06.64194773 +0000 UTC m=+1463.327410835"
Jan 26 15:11:09 crc kubenswrapper[4823]: I0126 15:11:09.584239 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9x6ck"
Jan 26 15:11:09 crc kubenswrapper[4823]: I0126 15:11:09.584699 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9x6ck"
Jan 26 15:11:10 crc kubenswrapper[4823]: I0126 15:11:10.632952 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9x6ck" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="registry-server" probeResult="failure" output=<
Jan 26 15:11:10 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s
Jan 26 15:11:10 crc kubenswrapper[4823]: >
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.600645 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"]
Jan 26 15:11:11 crc kubenswrapper[4823]: E0126 15:11:11.602248 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerName="dnsmasq-dns"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.602282 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerName="dnsmasq-dns"
Jan 26 15:11:11 crc kubenswrapper[4823]: E0126 15:11:11.602325 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerName="init"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.602340 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerName="init"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.602649 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7461b78c-bd91-4815-be93-bfdc8afce17e" containerName="dnsmasq-dns"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.603862 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.606721 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.606992 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.607291 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.612352 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.618234 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"]
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.717009 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.717572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw58x\" (UniqueName: \"kubernetes.io/projected/69c2cec8-efd8-4432-8c31-bd77a00d4792-kube-api-access-pw58x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.717612 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.717673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.819589 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.819675 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw58x\" (UniqueName: \"kubernetes.io/projected/69c2cec8-efd8-4432-8c31-bd77a00d4792-kube-api-access-pw58x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.819703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.819742 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.830096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.830108 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.831717 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.840015 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw58x\" (UniqueName: \"kubernetes.io/projected/69c2cec8-efd8-4432-8c31-bd77a00d4792-kube-api-access-pw58x\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-6856k\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:11 crc kubenswrapper[4823]: I0126 15:11:11.935123 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:12 crc kubenswrapper[4823]: I0126 15:11:12.534822 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"]
Jan 26 15:11:12 crc kubenswrapper[4823]: W0126 15:11:12.545105 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69c2cec8_efd8_4432_8c31_bd77a00d4792.slice/crio-b93bccb50ba02825d3bfffdaf784a5f64c5eb817716b14396548de4f5826e8ed WatchSource:0}: Error finding container b93bccb50ba02825d3bfffdaf784a5f64c5eb817716b14396548de4f5826e8ed: Status 404 returned error can't find the container with id b93bccb50ba02825d3bfffdaf784a5f64c5eb817716b14396548de4f5826e8ed
Jan 26 15:11:12 crc kubenswrapper[4823]: I0126 15:11:12.669267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k" event={"ID":"69c2cec8-efd8-4432-8c31-bd77a00d4792","Type":"ContainerStarted","Data":"b93bccb50ba02825d3bfffdaf784a5f64c5eb817716b14396548de4f5826e8ed"}
Jan 26 15:11:13 crc kubenswrapper[4823]: I0126 15:11:13.694073 4823 generic.go:334] "Generic (PLEG): container finished" podID="a38fdbff-2641-41d8-9b9b-ad6fe2fd9147" containerID="19aca4619745ff716daa4e79d3a0bdfdc119b35b63d9cb55e22f166e0f2aaac5" exitCode=0
Jan 26 15:11:13 crc kubenswrapper[4823]: I0126 15:11:13.694206 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147","Type":"ContainerDied","Data":"19aca4619745ff716daa4e79d3a0bdfdc119b35b63d9cb55e22f166e0f2aaac5"}
Jan 26 15:11:13 crc kubenswrapper[4823]: I0126 15:11:13.700038 4823 generic.go:334] "Generic (PLEG): container finished" podID="c89e518a-a264-4196-97f5-4614a0b2d59d" containerID="65cae0ad6ac02aa7c5ce47d622ffe7292514378d8db79a569374dcc551b80285" exitCode=0
Jan 26 15:11:13 crc kubenswrapper[4823]: I0126 15:11:13.700109 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c89e518a-a264-4196-97f5-4614a0b2d59d","Type":"ContainerDied","Data":"65cae0ad6ac02aa7c5ce47d622ffe7292514378d8db79a569374dcc551b80285"}
Jan 26 15:11:14 crc kubenswrapper[4823]: I0126 15:11:14.735068 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c89e518a-a264-4196-97f5-4614a0b2d59d","Type":"ContainerStarted","Data":"63476c126a1a272fa7c7a5753fa963290dc7251fc2eefc5234355ace6f9c9a3c"}
Jan 26 15:11:14 crc kubenswrapper[4823]: I0126 15:11:14.736176 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:11:14 crc kubenswrapper[4823]: I0126 15:11:14.739104 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a38fdbff-2641-41d8-9b9b-ad6fe2fd9147","Type":"ContainerStarted","Data":"a896a25734f12540b7fa5233d350ec80b38bd62f726233ccbbb7e2f69dbd57a3"}
Jan 26 15:11:14 crc kubenswrapper[4823]: I0126 15:11:14.740263 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 26 15:11:14 crc kubenswrapper[4823]: I0126 15:11:14.763301 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.763269644 podStartE2EDuration="36.763269644s" podCreationTimestamp="2026-01-26 15:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:11:14.757914788 +0000 UTC m=+1471.443377913" watchObservedRunningTime="2026-01-26 15:11:14.763269644 +0000 UTC m=+1471.448732749"
Jan 26 15:11:14 crc kubenswrapper[4823]: I0126 15:11:14.790094 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.790072312 podStartE2EDuration="36.790072312s" podCreationTimestamp="2026-01-26 15:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:11:14.784592163 +0000 UTC m=+1471.470055268" watchObservedRunningTime="2026-01-26 15:11:14.790072312 +0000 UTC m=+1471.475535417"
Jan 26 15:11:19 crc kubenswrapper[4823]: I0126 15:11:19.648240 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9x6ck"
Jan 26 15:11:19 crc kubenswrapper[4823]: I0126 15:11:19.718031 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9x6ck"
Jan 26 15:11:19 crc kubenswrapper[4823]: I0126 15:11:19.894430 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x6ck"]
Jan 26 15:11:20 crc kubenswrapper[4823]: I0126 15:11:20.809935 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9x6ck" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="registry-server" containerID="cri-o://b7c05fb0c8f85d373c43cd1ffea2f9c3f75580b24068d676f325a08415309b6f" gracePeriod=2
Jan 26 15:11:21 crc kubenswrapper[4823]: I0126 15:11:21.823511 4823 generic.go:334] "Generic (PLEG): container finished" podID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerID="b7c05fb0c8f85d373c43cd1ffea2f9c3f75580b24068d676f325a08415309b6f" exitCode=0
Jan 26 15:11:21 crc kubenswrapper[4823]: I0126 15:11:21.823571 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerDied","Data":"b7c05fb0c8f85d373c43cd1ffea2f9c3f75580b24068d676f325a08415309b6f"}
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.608834 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x6ck"
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.718204 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-utilities\") pod \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") "
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.718559 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc2mf\" (UniqueName: \"kubernetes.io/projected/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-kube-api-access-tc2mf\") pod \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") "
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.719179 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-catalog-content\") pod \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\" (UID: \"1ec2c604-b6dc-4338-b274-0e5a8063c5e4\") "
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.722793 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-utilities" (OuterVolumeSpecName: "utilities") pod "1ec2c604-b6dc-4338-b274-0e5a8063c5e4" (UID: "1ec2c604-b6dc-4338-b274-0e5a8063c5e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.726518 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-kube-api-access-tc2mf" (OuterVolumeSpecName: "kube-api-access-tc2mf") pod "1ec2c604-b6dc-4338-b274-0e5a8063c5e4" (UID: "1ec2c604-b6dc-4338-b274-0e5a8063c5e4"). InnerVolumeSpecName "kube-api-access-tc2mf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.821840 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc2mf\" (UniqueName: \"kubernetes.io/projected/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-kube-api-access-tc2mf\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.821917 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.849086 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k" event={"ID":"69c2cec8-efd8-4432-8c31-bd77a00d4792","Type":"ContainerStarted","Data":"0a0d7552456b5c37c4e529b68cb55d5080d7026208629a27d2524f32b1ba5dd1"}
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.852467 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9x6ck" event={"ID":"1ec2c604-b6dc-4338-b274-0e5a8063c5e4","Type":"ContainerDied","Data":"dc2bbdfbfd131539dbbe42b8ade478117081863c73c2ef5481c3a2fdc1913c42"}
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.852514 4823 scope.go:117] "RemoveContainer" containerID="b7c05fb0c8f85d373c43cd1ffea2f9c3f75580b24068d676f325a08415309b6f"
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.852663 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9x6ck"
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.880816 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ec2c604-b6dc-4338-b274-0e5a8063c5e4" (UID: "1ec2c604-b6dc-4338-b274-0e5a8063c5e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.881337 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k" podStartSLOduration=2.131001811 podStartE2EDuration="12.88131132s" podCreationTimestamp="2026-01-26 15:11:11 +0000 UTC" firstStartedPulling="2026-01-26 15:11:12.548217064 +0000 UTC m=+1469.233680169" lastFinishedPulling="2026-01-26 15:11:23.298526573 +0000 UTC m=+1479.983989678" observedRunningTime="2026-01-26 15:11:23.867138474 +0000 UTC m=+1480.552601589" watchObservedRunningTime="2026-01-26 15:11:23.88131132 +0000 UTC m=+1480.566774415"
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.882513 4823 scope.go:117] "RemoveContainer" containerID="914c0f967c2bf658547f33c1359c58eb10622d67a913a7616eea4df04f399e46"
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.915443 4823 scope.go:117] "RemoveContainer" containerID="94a349296872230af3e1fe48b0917766ec080ad7fd31113750a3c70a2aa2a25c"
Jan 26 15:11:23 crc kubenswrapper[4823]: I0126 15:11:23.924189 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ec2c604-b6dc-4338-b274-0e5a8063c5e4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:24 crc kubenswrapper[4823]: I0126 15:11:24.203160 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9x6ck"]
Jan 26 15:11:24 crc kubenswrapper[4823]: I0126 15:11:24.214796 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9x6ck"]
Jan 26 15:11:25 crc kubenswrapper[4823]: I0126 15:11:25.573627 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" path="/var/lib/kubelet/pods/1ec2c604-b6dc-4338-b274-0e5a8063c5e4/volumes"
Jan 26 15:11:28 crc kubenswrapper[4823]: I0126 15:11:28.676604 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 26 15:11:28 crc kubenswrapper[4823]: I0126 15:11:28.688495 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Jan 26 15:11:35 crc kubenswrapper[4823]: I0126 15:11:35.969583 4823 generic.go:334] "Generic (PLEG): container finished" podID="69c2cec8-efd8-4432-8c31-bd77a00d4792" containerID="0a0d7552456b5c37c4e529b68cb55d5080d7026208629a27d2524f32b1ba5dd1" exitCode=0
Jan 26 15:11:35 crc kubenswrapper[4823]: I0126 15:11:35.969674 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k" event={"ID":"69c2cec8-efd8-4432-8c31-bd77a00d4792","Type":"ContainerDied","Data":"0a0d7552456b5c37c4e529b68cb55d5080d7026208629a27d2524f32b1ba5dd1"}
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.508485 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.599946 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-inventory\") pod \"69c2cec8-efd8-4432-8c31-bd77a00d4792\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") "
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.600090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw58x\" (UniqueName: \"kubernetes.io/projected/69c2cec8-efd8-4432-8c31-bd77a00d4792-kube-api-access-pw58x\") pod \"69c2cec8-efd8-4432-8c31-bd77a00d4792\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") "
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.600193 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-repo-setup-combined-ca-bundle\") pod \"69c2cec8-efd8-4432-8c31-bd77a00d4792\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") "
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.600261 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-ssh-key-openstack-edpm-ipam\") pod \"69c2cec8-efd8-4432-8c31-bd77a00d4792\" (UID: \"69c2cec8-efd8-4432-8c31-bd77a00d4792\") "
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.607356 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "69c2cec8-efd8-4432-8c31-bd77a00d4792" (UID: "69c2cec8-efd8-4432-8c31-bd77a00d4792"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.607784 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c2cec8-efd8-4432-8c31-bd77a00d4792-kube-api-access-pw58x" (OuterVolumeSpecName: "kube-api-access-pw58x") pod "69c2cec8-efd8-4432-8c31-bd77a00d4792" (UID: "69c2cec8-efd8-4432-8c31-bd77a00d4792"). InnerVolumeSpecName "kube-api-access-pw58x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.634273 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-inventory" (OuterVolumeSpecName: "inventory") pod "69c2cec8-efd8-4432-8c31-bd77a00d4792" (UID: "69c2cec8-efd8-4432-8c31-bd77a00d4792"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.639913 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "69c2cec8-efd8-4432-8c31-bd77a00d4792" (UID: "69c2cec8-efd8-4432-8c31-bd77a00d4792"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.702475 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw58x\" (UniqueName: \"kubernetes.io/projected/69c2cec8-efd8-4432-8c31-bd77a00d4792-kube-api-access-pw58x\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.702798 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.702887 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.703087 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/69c2cec8-efd8-4432-8c31-bd77a00d4792-inventory\") on node \"crc\" DevicePath \"\""
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.991864 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k" event={"ID":"69c2cec8-efd8-4432-8c31-bd77a00d4792","Type":"ContainerDied","Data":"b93bccb50ba02825d3bfffdaf784a5f64c5eb817716b14396548de4f5826e8ed"}
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.992611 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93bccb50ba02825d3bfffdaf784a5f64c5eb817716b14396548de4f5826e8ed"
Jan 26 15:11:37 crc kubenswrapper[4823]: I0126 15:11:37.992582 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.105431 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"]
Jan 26 15:11:38 crc kubenswrapper[4823]: E0126 15:11:38.105910 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c2cec8-efd8-4432-8c31-bd77a00d4792" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.105935 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c2cec8-efd8-4432-8c31-bd77a00d4792" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 15:11:38 crc kubenswrapper[4823]: E0126 15:11:38.105952 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="extract-utilities"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.105962 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="extract-utilities"
Jan 26 15:11:38 crc kubenswrapper[4823]: E0126 15:11:38.105977 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="extract-content"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.105985 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="extract-content"
Jan 26 15:11:38 crc kubenswrapper[4823]: E0126 15:11:38.106023 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="registry-server"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.106031 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="registry-server"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.106245 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c2cec8-efd8-4432-8c31-bd77a00d4792" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.106310 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ec2c604-b6dc-4338-b274-0e5a8063c5e4" containerName="registry-server"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.107111 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.109417 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.109934 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.110579 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.110922 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.126948 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"]
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.212459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.212566 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.212638 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlm4\" (UniqueName: \"kubernetes.io/projected/2b81a5da-2c44-44de-a3b3-a6ea31c16692-kube-api-access-rmlm4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.212940 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.315759 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.315927 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.315974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.316002 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmlm4\" (UniqueName: \"kubernetes.io/projected/2b81a5da-2c44-44de-a3b3-a6ea31c16692-kube-api-access-rmlm4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.323931 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.324204 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.324836 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.335980 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmlm4\" (UniqueName: \"kubernetes.io/projected/2b81a5da-2c44-44de-a3b3-a6ea31c16692-kube-api-access-rmlm4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:38 crc kubenswrapper[4823]: I0126 15:11:38.426276 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"
Jan 26 15:11:39 crc kubenswrapper[4823]: I0126 15:11:39.045752 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"]
Jan 26 15:11:40 crc kubenswrapper[4823]: I0126 15:11:40.011139 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" event={"ID":"2b81a5da-2c44-44de-a3b3-a6ea31c16692","Type":"ContainerStarted","Data":"2d6999a7faf24dec67888c748f4426581758926a769069e9606d95ec2193dd73"}
Jan 26 15:11:41 crc kubenswrapper[4823]: I0126 15:11:41.020976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" event={"ID":"2b81a5da-2c44-44de-a3b3-a6ea31c16692","Type":"ContainerStarted","Data":"4d27f7224de11fa16066738cc75486e47b5db147c51097501dda6dfcffb79067"}
Jan 26 15:11:41 crc kubenswrapper[4823]: I0126 15:11:41.041873 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" podStartSLOduration=2.264344926 podStartE2EDuration="3.041849346s" podCreationTimestamp="2026-01-26 15:11:38 +0000 UTC" firstStartedPulling="2026-01-26 15:11:39.062523487 +0000 UTC m=+1495.747986592" lastFinishedPulling="2026-01-26 15:11:39.840027917 +0000 UTC m=+1496.525491012" observedRunningTime="2026-01-26 15:11:41.037839967 +0000 UTC m=+1497.723303072" watchObservedRunningTime="2026-01-26 15:11:41.041849346 +0000 UTC m=+1497.727312451"
Jan 26 15:12:03 crc kubenswrapper[4823]: I0126 15:12:03.221964 4823 scope.go:117] "RemoveContainer" containerID="6e82e9deb99af2b3b870a2ed2a53db407735ae46e99c77c22fa41e3ca8b9f407"
Jan 26 15:12:03 crc kubenswrapper[4823]: I0126 15:12:03.259468 4823 scope.go:117] "RemoveContainer" containerID="d324e4498e25c790364c529d8ff7c5a42be04ccc727f54417de05094a26b7b1f"
Jan 26 15:12:03 crc
kubenswrapper[4823]: I0126 15:12:03.285944 4823 scope.go:117] "RemoveContainer" containerID="4761f26d82e6fc4c9c9ced8d686425fa4265970e598f982b1fe5d3e9d152304a" Jan 26 15:12:03 crc kubenswrapper[4823]: I0126 15:12:03.413819 4823 scope.go:117] "RemoveContainer" containerID="ed59d9bf4c7e8e5a1a8e23c753100b50e0bc2d0528d6eae294a01d96973d87b8" Jan 26 15:12:03 crc kubenswrapper[4823]: I0126 15:12:03.446097 4823 scope.go:117] "RemoveContainer" containerID="04bd6c70314d9629c420f6379a2420c34cc3d405d46c54841b21f2ec2e5089a5" Jan 26 15:12:34 crc kubenswrapper[4823]: I0126 15:12:34.508631 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:12:34 crc kubenswrapper[4823]: I0126 15:12:34.509307 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.334741 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t2xmr"] Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.338327 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.389913 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2xmr"] Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.413102 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96gj4\" (UniqueName: \"kubernetes.io/projected/3c7e61f1-d18b-48f6-a644-bf611d667468-kube-api-access-96gj4\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.413577 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7e61f1-d18b-48f6-a644-bf611d667468-catalog-content\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.413763 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7e61f1-d18b-48f6-a644-bf611d667468-utilities\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.515787 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7e61f1-d18b-48f6-a644-bf611d667468-catalog-content\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.516270 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7e61f1-d18b-48f6-a644-bf611d667468-utilities\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.516469 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96gj4\" (UniqueName: \"kubernetes.io/projected/3c7e61f1-d18b-48f6-a644-bf611d667468-kube-api-access-96gj4\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.516595 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c7e61f1-d18b-48f6-a644-bf611d667468-catalog-content\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.516797 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c7e61f1-d18b-48f6-a644-bf611d667468-utilities\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.544438 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96gj4\" (UniqueName: \"kubernetes.io/projected/3c7e61f1-d18b-48f6-a644-bf611d667468-kube-api-access-96gj4\") pod \"community-operators-t2xmr\" (UID: \"3c7e61f1-d18b-48f6-a644-bf611d667468\") " pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:02 crc kubenswrapper[4823]: I0126 15:13:02.678145 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:03 crc kubenswrapper[4823]: I0126 15:13:03.233888 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2xmr"] Jan 26 15:13:03 crc kubenswrapper[4823]: I0126 15:13:03.602417 4823 scope.go:117] "RemoveContainer" containerID="f51ddb6c216396b0a1e52278ec724444b12385fd1f9fff1815b49a839766b6a1" Jan 26 15:13:04 crc kubenswrapper[4823]: I0126 15:13:04.031666 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c7e61f1-d18b-48f6-a644-bf611d667468" containerID="ac44f4a9798d28da6022bb85aa61d8ba7b7d61bd5580aad525cb7d3384ab2d43" exitCode=0 Jan 26 15:13:04 crc kubenswrapper[4823]: I0126 15:13:04.031768 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2xmr" event={"ID":"3c7e61f1-d18b-48f6-a644-bf611d667468","Type":"ContainerDied","Data":"ac44f4a9798d28da6022bb85aa61d8ba7b7d61bd5580aad525cb7d3384ab2d43"} Jan 26 15:13:04 crc kubenswrapper[4823]: I0126 15:13:04.032039 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2xmr" event={"ID":"3c7e61f1-d18b-48f6-a644-bf611d667468","Type":"ContainerStarted","Data":"4df6e4db0dd6539667531a82aa4bb6a572150b2a443ba0d92cb014d84f48e09b"} Jan 26 15:13:04 crc kubenswrapper[4823]: I0126 15:13:04.508377 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:13:04 crc kubenswrapper[4823]: I0126 15:13:04.508473 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:13:11 crc kubenswrapper[4823]: I0126 15:13:11.120808 4823 generic.go:334] "Generic (PLEG): container finished" podID="3c7e61f1-d18b-48f6-a644-bf611d667468" containerID="7f04da44bd7851ce7b9710ad9c4d99836ddc46fdac833854213f4486e272275c" exitCode=0 Jan 26 15:13:11 crc kubenswrapper[4823]: I0126 15:13:11.120879 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2xmr" event={"ID":"3c7e61f1-d18b-48f6-a644-bf611d667468","Type":"ContainerDied","Data":"7f04da44bd7851ce7b9710ad9c4d99836ddc46fdac833854213f4486e272275c"} Jan 26 15:13:13 crc kubenswrapper[4823]: I0126 15:13:13.144072 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t2xmr" event={"ID":"3c7e61f1-d18b-48f6-a644-bf611d667468","Type":"ContainerStarted","Data":"3a72ea230bd3c543f300ebd9133f720606b287e4cbdb51d974d335bb8b7d19d9"} Jan 26 15:13:13 crc kubenswrapper[4823]: I0126 15:13:13.180247 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t2xmr" podStartSLOduration=3.1596255109999998 podStartE2EDuration="11.180220737s" podCreationTimestamp="2026-01-26 15:13:02 +0000 UTC" firstStartedPulling="2026-01-26 15:13:04.034542859 +0000 UTC m=+1580.720005964" lastFinishedPulling="2026-01-26 15:13:12.055138085 +0000 UTC m=+1588.740601190" observedRunningTime="2026-01-26 15:13:13.165450354 +0000 UTC m=+1589.850913509" watchObservedRunningTime="2026-01-26 15:13:13.180220737 +0000 UTC m=+1589.865683842" Jan 26 15:13:22 crc kubenswrapper[4823]: I0126 15:13:22.679621 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:22 crc kubenswrapper[4823]: I0126 15:13:22.680787 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:22 crc kubenswrapper[4823]: I0126 15:13:22.732837 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:23 crc kubenswrapper[4823]: I0126 15:13:23.313428 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t2xmr" Jan 26 15:13:23 crc kubenswrapper[4823]: I0126 15:13:23.415868 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t2xmr"] Jan 26 15:13:23 crc kubenswrapper[4823]: I0126 15:13:23.452356 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k65lv"] Jan 26 15:13:23 crc kubenswrapper[4823]: I0126 15:13:23.452653 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k65lv" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="registry-server" containerID="cri-o://8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff" gracePeriod=2 Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.013094 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k65lv" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.100468 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-utilities\") pod \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.100585 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-catalog-content\") pod \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.100623 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2scqn\" (UniqueName: \"kubernetes.io/projected/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-kube-api-access-2scqn\") pod \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\" (UID: \"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067\") " Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.102449 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-utilities" (OuterVolumeSpecName: "utilities") pod "960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" (UID: "960e8dd1-6e3a-43e5-8d5a-ce90e97bc067"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.107968 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-kube-api-access-2scqn" (OuterVolumeSpecName: "kube-api-access-2scqn") pod "960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" (UID: "960e8dd1-6e3a-43e5-8d5a-ce90e97bc067"). InnerVolumeSpecName "kube-api-access-2scqn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.160726 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" (UID: "960e8dd1-6e3a-43e5-8d5a-ce90e97bc067"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.202534 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.202958 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2scqn\" (UniqueName: \"kubernetes.io/projected/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-kube-api-access-2scqn\") on node \"crc\" DevicePath \"\"" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.202972 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.275498 4823 generic.go:334] "Generic (PLEG): container finished" podID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerID="8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff" exitCode=0 Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.275559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerDied","Data":"8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff"} Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.275627 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-k65lv" event={"ID":"960e8dd1-6e3a-43e5-8d5a-ce90e97bc067","Type":"ContainerDied","Data":"b634d1feb876832bb2b46c06f7286cafa99e085e34b069bf3eb8cc2a0fa17699"} Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.275651 4823 scope.go:117] "RemoveContainer" containerID="8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.275582 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k65lv" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.315051 4823 scope.go:117] "RemoveContainer" containerID="15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.321432 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k65lv"] Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.332796 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k65lv"] Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.345148 4823 scope.go:117] "RemoveContainer" containerID="be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.390281 4823 scope.go:117] "RemoveContainer" containerID="8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff" Jan 26 15:13:24 crc kubenswrapper[4823]: E0126 15:13:24.390955 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff\": container with ID starting with 8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff not found: ID does not exist" containerID="8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 
15:13:24.391071 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff"} err="failed to get container status \"8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff\": rpc error: code = NotFound desc = could not find container \"8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff\": container with ID starting with 8fc6d21d7725d86e7605009a30ca1945ed332b572a006476404ab22ca992ceff not found: ID does not exist" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.391163 4823 scope.go:117] "RemoveContainer" containerID="15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43" Jan 26 15:13:24 crc kubenswrapper[4823]: E0126 15:13:24.391711 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43\": container with ID starting with 15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43 not found: ID does not exist" containerID="15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.391745 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43"} err="failed to get container status \"15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43\": rpc error: code = NotFound desc = could not find container \"15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43\": container with ID starting with 15f877b7e7fef61406e867ab3a1400c95c0def9aeeb46a6e0519227177593c43 not found: ID does not exist" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.391767 4823 scope.go:117] "RemoveContainer" containerID="be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c" Jan 26 15:13:24 crc 
kubenswrapper[4823]: E0126 15:13:24.392027 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c\": container with ID starting with be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c not found: ID does not exist" containerID="be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c" Jan 26 15:13:24 crc kubenswrapper[4823]: I0126 15:13:24.392055 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c"} err="failed to get container status \"be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c\": rpc error: code = NotFound desc = could not find container \"be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c\": container with ID starting with be560354847ebdf8d9ba7fc6fdbbd174006b239adb1b582912f4388eb367cb8c not found: ID does not exist" Jan 26 15:13:25 crc kubenswrapper[4823]: I0126 15:13:25.571666 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" path="/var/lib/kubelet/pods/960e8dd1-6e3a-43e5-8d5a-ce90e97bc067/volumes" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.783759 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 15:13:26 crc kubenswrapper[4823]: E0126 15:13:26.784208 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="registry-server" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.784226 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="registry-server" Jan 26 15:13:26 crc kubenswrapper[4823]: E0126 15:13:26.784253 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="extract-content" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.784261 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="extract-content" Jan 26 15:13:26 crc kubenswrapper[4823]: E0126 15:13:26.784277 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="extract-utilities" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.784286 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="extract-utilities" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.784555 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="960e8dd1-6e3a-43e5-8d5a-ce90e97bc067" containerName="registry-server" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.786217 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.796339 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.847862 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-utilities\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.847982 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2zcf\" (UniqueName: \"kubernetes.io/projected/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-kube-api-access-v2zcf\") pod \"certified-operators-q2245\" (UID: 
\"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.848007 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-catalog-content\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.949817 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2zcf\" (UniqueName: \"kubernetes.io/projected/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-kube-api-access-v2zcf\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.949896 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-catalog-content\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.950057 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-utilities\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.950716 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-catalog-content\") pod \"certified-operators-q2245\" (UID: 
\"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.950769 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-utilities\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:26 crc kubenswrapper[4823]: I0126 15:13:26.971440 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2zcf\" (UniqueName: \"kubernetes.io/projected/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-kube-api-access-v2zcf\") pod \"certified-operators-q2245\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:27 crc kubenswrapper[4823]: I0126 15:13:27.115229 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:27 crc kubenswrapper[4823]: I0126 15:13:27.673710 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 15:13:28 crc kubenswrapper[4823]: I0126 15:13:28.334886 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerID="53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4" exitCode=0 Jan 26 15:13:28 crc kubenswrapper[4823]: I0126 15:13:28.335445 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerDied","Data":"53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4"} Jan 26 15:13:28 crc kubenswrapper[4823]: I0126 15:13:28.335567 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" 
event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerStarted","Data":"cfc93bdcacb7fcdc8127ea5f2cb752176bc55b512ccfb39c89eaa495615ddc32"} Jan 26 15:13:34 crc kubenswrapper[4823]: I0126 15:13:34.508290 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:13:34 crc kubenswrapper[4823]: I0126 15:13:34.509108 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:13:34 crc kubenswrapper[4823]: I0126 15:13:34.509162 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:13:34 crc kubenswrapper[4823]: I0126 15:13:34.509947 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:13:34 crc kubenswrapper[4823]: I0126 15:13:34.509998 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" gracePeriod=600 Jan 26 15:13:34 crc kubenswrapper[4823]: E0126 15:13:34.711575 
4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:13:34 crc kubenswrapper[4823]: E0126 15:13:34.754976 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3a166e_bc51_4f3e_baf7_9a9d3cd4e85d.slice/crio-conmon-9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:13:35 crc kubenswrapper[4823]: I0126 15:13:35.408260 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" exitCode=0 Jan 26 15:13:35 crc kubenswrapper[4823]: I0126 15:13:35.408322 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123"} Jan 26 15:13:35 crc kubenswrapper[4823]: I0126 15:13:35.408413 4823 scope.go:117] "RemoveContainer" containerID="5873fe7ad32e2369de7d83b599dba09a2b10db32679ec89fa6711c86f67ecbb2" Jan 26 15:13:35 crc kubenswrapper[4823]: I0126 15:13:35.409482 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:13:35 crc kubenswrapper[4823]: E0126 15:13:35.410044 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:13:36 crc kubenswrapper[4823]: I0126 15:13:36.429839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerStarted","Data":"cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80"} Jan 26 15:13:37 crc kubenswrapper[4823]: I0126 15:13:37.444458 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerID="cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80" exitCode=0 Jan 26 15:13:37 crc kubenswrapper[4823]: I0126 15:13:37.444547 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerDied","Data":"cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80"} Jan 26 15:13:40 crc kubenswrapper[4823]: I0126 15:13:40.478271 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerStarted","Data":"5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b"} Jan 26 15:13:40 crc kubenswrapper[4823]: I0126 15:13:40.499567 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q2245" podStartSLOduration=3.154167589 podStartE2EDuration="14.499547478s" podCreationTimestamp="2026-01-26 15:13:26 +0000 UTC" firstStartedPulling="2026-01-26 15:13:28.338679346 +0000 UTC m=+1605.024142451" lastFinishedPulling="2026-01-26 15:13:39.684059215 +0000 UTC 
m=+1616.369522340" observedRunningTime="2026-01-26 15:13:40.498392956 +0000 UTC m=+1617.183856061" watchObservedRunningTime="2026-01-26 15:13:40.499547478 +0000 UTC m=+1617.185010593" Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.116507 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.117279 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.185184 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.560911 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:13:47 crc kubenswrapper[4823]: E0126 15:13:47.561557 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.607503 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q2245" Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.693437 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 15:13:47 crc kubenswrapper[4823]: I0126 15:13:47.717288 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttpzc"] Jan 26 15:13:47 crc 
kubenswrapper[4823]: I0126 15:13:47.717558 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ttpzc" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="registry-server" containerID="cri-o://3185df8bfd366c7103e8286c0e283c93c0c0071fb14b428dae04054ab4e585fd" gracePeriod=2 Jan 26 15:13:48 crc kubenswrapper[4823]: I0126 15:13:48.582072 4823 generic.go:334] "Generic (PLEG): container finished" podID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerID="3185df8bfd366c7103e8286c0e283c93c0c0071fb14b428dae04054ab4e585fd" exitCode=0 Jan 26 15:13:48 crc kubenswrapper[4823]: I0126 15:13:48.582192 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerDied","Data":"3185df8bfd366c7103e8286c0e283c93c0c0071fb14b428dae04054ab4e585fd"} Jan 26 15:13:48 crc kubenswrapper[4823]: I0126 15:13:48.886489 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.021801 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkqg2\" (UniqueName: \"kubernetes.io/projected/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-kube-api-access-pkqg2\") pod \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.022047 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-catalog-content\") pod \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.022120 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-utilities\") pod \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\" (UID: \"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d\") " Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.023016 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-utilities" (OuterVolumeSpecName: "utilities") pod "d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" (UID: "d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.028004 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-kube-api-access-pkqg2" (OuterVolumeSpecName: "kube-api-access-pkqg2") pod "d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" (UID: "d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d"). InnerVolumeSpecName "kube-api-access-pkqg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.068225 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" (UID: "d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.124163 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.124202 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.124212 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkqg2\" (UniqueName: \"kubernetes.io/projected/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d-kube-api-access-pkqg2\") on node \"crc\" DevicePath \"\"" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.598575 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ttpzc" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.598665 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ttpzc" event={"ID":"d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d","Type":"ContainerDied","Data":"8659f24b6dcd0f00fc2a01b6b93b80c1f719c64aad920ae60088c1baf137d876"} Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.598741 4823 scope.go:117] "RemoveContainer" containerID="3185df8bfd366c7103e8286c0e283c93c0c0071fb14b428dae04054ab4e585fd" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.656103 4823 scope.go:117] "RemoveContainer" containerID="45ede807e304b93e76243accbb58a6bd0de4c0d09f0b88d8d0d625e10260ed3b" Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.656726 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ttpzc"] Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.682142 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ttpzc"] Jan 26 15:13:49 crc kubenswrapper[4823]: I0126 15:13:49.704257 4823 scope.go:117] "RemoveContainer" containerID="b2f8f9297e364d873dba02c04de2ae94584ece9a68959085606a80a6723ace93" Jan 26 15:13:51 crc kubenswrapper[4823]: I0126 15:13:51.574746 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" path="/var/lib/kubelet/pods/d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d/volumes" Jan 26 15:14:00 crc kubenswrapper[4823]: I0126 15:14:00.561073 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:14:00 crc kubenswrapper[4823]: E0126 15:14:00.562128 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:14:03 crc kubenswrapper[4823]: I0126 15:14:03.701864 4823 scope.go:117] "RemoveContainer" containerID="3da3743610842f3fc1856404fe9012924932ea9a39688db0637f689dadc8255d" Jan 26 15:14:03 crc kubenswrapper[4823]: I0126 15:14:03.740804 4823 scope.go:117] "RemoveContainer" containerID="b238185c5a2a4b6a9ca0e56e9e5a331c3d903fd0d076829bd1be4df28216bfeb" Jan 26 15:14:15 crc kubenswrapper[4823]: I0126 15:14:15.560806 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:14:15 crc kubenswrapper[4823]: E0126 15:14:15.561851 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:14:28 crc kubenswrapper[4823]: I0126 15:14:28.560389 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:14:28 crc kubenswrapper[4823]: E0126 15:14:28.561310 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:14:43 crc kubenswrapper[4823]: I0126 15:14:43.571272 4823 
scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:14:43 crc kubenswrapper[4823]: E0126 15:14:43.573781 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:14:52 crc kubenswrapper[4823]: I0126 15:14:52.659134 4823 generic.go:334] "Generic (PLEG): container finished" podID="2b81a5da-2c44-44de-a3b3-a6ea31c16692" containerID="4d27f7224de11fa16066738cc75486e47b5db147c51097501dda6dfcffb79067" exitCode=0 Jan 26 15:14:52 crc kubenswrapper[4823]: I0126 15:14:52.659373 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" event={"ID":"2b81a5da-2c44-44de-a3b3-a6ea31c16692","Type":"ContainerDied","Data":"4d27f7224de11fa16066738cc75486e47b5db147c51097501dda6dfcffb79067"} Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.193229 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.326562 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-bootstrap-combined-ca-bundle\") pod \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.326864 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmlm4\" (UniqueName: \"kubernetes.io/projected/2b81a5da-2c44-44de-a3b3-a6ea31c16692-kube-api-access-rmlm4\") pod \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.327099 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-ssh-key-openstack-edpm-ipam\") pod \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.327337 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-inventory\") pod \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\" (UID: \"2b81a5da-2c44-44de-a3b3-a6ea31c16692\") " Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.340108 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b81a5da-2c44-44de-a3b3-a6ea31c16692-kube-api-access-rmlm4" (OuterVolumeSpecName: "kube-api-access-rmlm4") pod "2b81a5da-2c44-44de-a3b3-a6ea31c16692" (UID: "2b81a5da-2c44-44de-a3b3-a6ea31c16692"). InnerVolumeSpecName "kube-api-access-rmlm4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.340542 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2b81a5da-2c44-44de-a3b3-a6ea31c16692" (UID: "2b81a5da-2c44-44de-a3b3-a6ea31c16692"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.355909 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-inventory" (OuterVolumeSpecName: "inventory") pod "2b81a5da-2c44-44de-a3b3-a6ea31c16692" (UID: "2b81a5da-2c44-44de-a3b3-a6ea31c16692"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.359418 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b81a5da-2c44-44de-a3b3-a6ea31c16692" (UID: "2b81a5da-2c44-44de-a3b3-a6ea31c16692"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.430355 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.430683 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.430928 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmlm4\" (UniqueName: \"kubernetes.io/projected/2b81a5da-2c44-44de-a3b3-a6ea31c16692-kube-api-access-rmlm4\") on node \"crc\" DevicePath \"\"" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.431099 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b81a5da-2c44-44de-a3b3-a6ea31c16692-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.687742 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" event={"ID":"2b81a5da-2c44-44de-a3b3-a6ea31c16692","Type":"ContainerDied","Data":"2d6999a7faf24dec67888c748f4426581758926a769069e9606d95ec2193dd73"} Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.688254 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d6999a7faf24dec67888c748f4426581758926a769069e9606d95ec2193dd73" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.687845 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.799877 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg"] Jan 26 15:14:54 crc kubenswrapper[4823]: E0126 15:14:54.800427 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="extract-content" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.800459 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="extract-content" Jan 26 15:14:54 crc kubenswrapper[4823]: E0126 15:14:54.800478 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="registry-server" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.800486 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="registry-server" Jan 26 15:14:54 crc kubenswrapper[4823]: E0126 15:14:54.800501 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="extract-utilities" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.800510 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="extract-utilities" Jan 26 15:14:54 crc kubenswrapper[4823]: E0126 15:14:54.800532 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b81a5da-2c44-44de-a3b3-a6ea31c16692" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.800539 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b81a5da-2c44-44de-a3b3-a6ea31c16692" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.800763 
4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b81a5da-2c44-44de-a3b3-a6ea31c16692" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.800779 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d371a6c1-69f3-4b6d-a68b-7bd70ea0d77d" containerName="registry-server" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.801654 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.808192 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.808281 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.808425 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.808786 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.840029 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg"] Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.941816 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdwqc\" (UniqueName: \"kubernetes.io/projected/c4a3642c-422b-460f-9554-6bcaeb591ea2-kube-api-access-vdwqc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 
15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.942105 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:54 crc kubenswrapper[4823]: I0126 15:14:54.942156 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.064671 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.064924 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.065219 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-vdwqc\" (UniqueName: \"kubernetes.io/projected/c4a3642c-422b-460f-9554-6bcaeb591ea2-kube-api-access-vdwqc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.099786 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.103112 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.103415 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdwqc\" (UniqueName: \"kubernetes.io/projected/c4a3642c-422b-460f-9554-6bcaeb591ea2-kube-api-access-vdwqc\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-x4msg\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.129010 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.698768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg"] Jan 26 15:14:55 crc kubenswrapper[4823]: I0126 15:14:55.708125 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:14:56 crc kubenswrapper[4823]: I0126 15:14:56.710146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" event={"ID":"c4a3642c-422b-460f-9554-6bcaeb591ea2","Type":"ContainerStarted","Data":"e273fc5e4695a563b624dd1525dbbc6dedd2a0617ca51f706a4c61b041486b5e"} Jan 26 15:14:57 crc kubenswrapper[4823]: I0126 15:14:57.566130 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:14:57 crc kubenswrapper[4823]: E0126 15:14:57.567016 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:14:57 crc kubenswrapper[4823]: I0126 15:14:57.728148 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" event={"ID":"c4a3642c-422b-460f-9554-6bcaeb591ea2","Type":"ContainerStarted","Data":"d3929e4a14c21df8fee731e99ffe9ccf66d32cf925928eccf332628a03810fd8"} Jan 26 15:14:57 crc kubenswrapper[4823]: I0126 15:14:57.753771 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" podStartSLOduration=2.816916182 podStartE2EDuration="3.75374549s" podCreationTimestamp="2026-01-26 15:14:54 +0000 UTC" firstStartedPulling="2026-01-26 15:14:55.707824654 +0000 UTC m=+1692.393287759" lastFinishedPulling="2026-01-26 15:14:56.644653962 +0000 UTC m=+1693.330117067" observedRunningTime="2026-01-26 15:14:57.750570274 +0000 UTC m=+1694.436033399" watchObservedRunningTime="2026-01-26 15:14:57.75374549 +0000 UTC m=+1694.439208605" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.139749 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb"] Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.143944 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.146438 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.146441 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.160405 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb"] Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.304680 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837cede5-7802-40a7-a31f-09df765035ac-config-volume\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: 
I0126 15:15:00.304859 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrf4j\" (UniqueName: \"kubernetes.io/projected/837cede5-7802-40a7-a31f-09df765035ac-kube-api-access-rrf4j\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.304993 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837cede5-7802-40a7-a31f-09df765035ac-secret-volume\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.406883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837cede5-7802-40a7-a31f-09df765035ac-config-volume\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.407139 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrf4j\" (UniqueName: \"kubernetes.io/projected/837cede5-7802-40a7-a31f-09df765035ac-kube-api-access-rrf4j\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.407221 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837cede5-7802-40a7-a31f-09df765035ac-secret-volume\") pod \"collect-profiles-29490675-75pbb\" 
(UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.407893 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837cede5-7802-40a7-a31f-09df765035ac-config-volume\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.420579 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837cede5-7802-40a7-a31f-09df765035ac-secret-volume\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.441671 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrf4j\" (UniqueName: \"kubernetes.io/projected/837cede5-7802-40a7-a31f-09df765035ac-kube-api-access-rrf4j\") pod \"collect-profiles-29490675-75pbb\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.471948 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:00 crc kubenswrapper[4823]: I0126 15:15:00.937717 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb"] Jan 26 15:15:00 crc kubenswrapper[4823]: W0126 15:15:00.941641 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod837cede5_7802_40a7_a31f_09df765035ac.slice/crio-b282807846d925e99ae37dd8d9bde9310837b6dd7885c09e5955fd755f28a127 WatchSource:0}: Error finding container b282807846d925e99ae37dd8d9bde9310837b6dd7885c09e5955fd755f28a127: Status 404 returned error can't find the container with id b282807846d925e99ae37dd8d9bde9310837b6dd7885c09e5955fd755f28a127 Jan 26 15:15:01 crc kubenswrapper[4823]: I0126 15:15:01.765399 4823 generic.go:334] "Generic (PLEG): container finished" podID="837cede5-7802-40a7-a31f-09df765035ac" containerID="49a1a46767a1100f302a39c595025cc13cea955f88daa4174b783c4039320cdb" exitCode=0 Jan 26 15:15:01 crc kubenswrapper[4823]: I0126 15:15:01.765462 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" event={"ID":"837cede5-7802-40a7-a31f-09df765035ac","Type":"ContainerDied","Data":"49a1a46767a1100f302a39c595025cc13cea955f88daa4174b783c4039320cdb"} Jan 26 15:15:01 crc kubenswrapper[4823]: I0126 15:15:01.765734 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" event={"ID":"837cede5-7802-40a7-a31f-09df765035ac","Type":"ContainerStarted","Data":"b282807846d925e99ae37dd8d9bde9310837b6dd7885c09e5955fd755f28a127"} Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.090267 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.192156 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837cede5-7802-40a7-a31f-09df765035ac-config-volume\") pod \"837cede5-7802-40a7-a31f-09df765035ac\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.192744 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837cede5-7802-40a7-a31f-09df765035ac-secret-volume\") pod \"837cede5-7802-40a7-a31f-09df765035ac\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.192823 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrf4j\" (UniqueName: \"kubernetes.io/projected/837cede5-7802-40a7-a31f-09df765035ac-kube-api-access-rrf4j\") pod \"837cede5-7802-40a7-a31f-09df765035ac\" (UID: \"837cede5-7802-40a7-a31f-09df765035ac\") " Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.193067 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/837cede5-7802-40a7-a31f-09df765035ac-config-volume" (OuterVolumeSpecName: "config-volume") pod "837cede5-7802-40a7-a31f-09df765035ac" (UID: "837cede5-7802-40a7-a31f-09df765035ac"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.193320 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/837cede5-7802-40a7-a31f-09df765035ac-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.204755 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837cede5-7802-40a7-a31f-09df765035ac-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "837cede5-7802-40a7-a31f-09df765035ac" (UID: "837cede5-7802-40a7-a31f-09df765035ac"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.204875 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837cede5-7802-40a7-a31f-09df765035ac-kube-api-access-rrf4j" (OuterVolumeSpecName: "kube-api-access-rrf4j") pod "837cede5-7802-40a7-a31f-09df765035ac" (UID: "837cede5-7802-40a7-a31f-09df765035ac"). InnerVolumeSpecName "kube-api-access-rrf4j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.295271 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/837cede5-7802-40a7-a31f-09df765035ac-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.295349 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrf4j\" (UniqueName: \"kubernetes.io/projected/837cede5-7802-40a7-a31f-09df765035ac-kube-api-access-rrf4j\") on node \"crc\" DevicePath \"\"" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.801516 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" event={"ID":"837cede5-7802-40a7-a31f-09df765035ac","Type":"ContainerDied","Data":"b282807846d925e99ae37dd8d9bde9310837b6dd7885c09e5955fd755f28a127"} Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.801591 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b282807846d925e99ae37dd8d9bde9310837b6dd7885c09e5955fd755f28a127" Jan 26 15:15:03 crc kubenswrapper[4823]: I0126 15:15:03.801653 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb" Jan 26 15:15:08 crc kubenswrapper[4823]: I0126 15:15:08.560617 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:15:08 crc kubenswrapper[4823]: E0126 15:15:08.561922 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:15:21 crc kubenswrapper[4823]: I0126 15:15:21.561053 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:15:21 crc kubenswrapper[4823]: E0126 15:15:21.562198 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:15:34 crc kubenswrapper[4823]: I0126 15:15:34.560779 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:15:34 crc kubenswrapper[4823]: E0126 15:15:34.562141 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.070338 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-z4kl4"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.084860 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-wdc2m"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.095293 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a730-account-create-update-4grxj"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.105452 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e187-account-create-update-ktfqv"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.115324 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1276-account-create-update-7vdml"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.123652 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gpxrj"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.134126 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-z4kl4"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.142078 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e187-account-create-update-ktfqv"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.149850 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a730-account-create-update-4grxj"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.158201 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1276-account-create-update-7vdml"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.167273 4823 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-wdc2m"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.175448 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gpxrj"] Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.573725 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11a49820-f006-42b2-8441-525ca8601f6c" path="/var/lib/kubelet/pods/11a49820-f006-42b2-8441-525ca8601f6c/volumes" Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.574919 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f5abdd-e891-46c4-87ef-b6446b54bf07" path="/var/lib/kubelet/pods/18f5abdd-e891-46c4-87ef-b6446b54bf07/volumes" Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.575580 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eb6be81-80b7-40c3-a17e-f09cc5c0715f" path="/var/lib/kubelet/pods/3eb6be81-80b7-40c3-a17e-f09cc5c0715f/volumes" Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.576467 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5fed33-52f8-4a1a-9096-794711814cf5" path="/var/lib/kubelet/pods/5d5fed33-52f8-4a1a-9096-794711814cf5/volumes" Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.578118 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac560032-e524-45c4-bc11-a960f50c4f07" path="/var/lib/kubelet/pods/ac560032-e524-45c4-bc11-a960f50c4f07/volumes" Jan 26 15:15:43 crc kubenswrapper[4823]: I0126 15:15:43.578736 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e935f38b-5459-4bcc-a9f0-50e5cecef101" path="/var/lib/kubelet/pods/e935f38b-5459-4bcc-a9f0-50e5cecef101/volumes" Jan 26 15:15:47 crc kubenswrapper[4823]: I0126 15:15:47.560871 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:15:47 crc kubenswrapper[4823]: E0126 15:15:47.561498 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:15:58 crc kubenswrapper[4823]: I0126 15:15:58.578535 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:15:58 crc kubenswrapper[4823]: E0126 15:15:58.583036 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:16:03 crc kubenswrapper[4823]: I0126 15:16:03.838271 4823 scope.go:117] "RemoveContainer" containerID="e4946988874a34a93ea4ddb0e1df44a2b56b4ef55e05d0bf6306865899a8d489" Jan 26 15:16:03 crc kubenswrapper[4823]: I0126 15:16:03.876748 4823 scope.go:117] "RemoveContainer" containerID="5e05c66a05b73f1d03b73657de8bcb53fe8aa6bccdf2cf98b74431e4e785a48a" Jan 26 15:16:03 crc kubenswrapper[4823]: I0126 15:16:03.934575 4823 scope.go:117] "RemoveContainer" containerID="1390be81e78856b4dc56442c7c082b9c4d9ff0d32b7d3e8fbb68a2596f2a3248" Jan 26 15:16:03 crc kubenswrapper[4823]: I0126 15:16:03.980723 4823 scope.go:117] "RemoveContainer" containerID="e355dcc85d283945c0300a5a345e025a45c32489e4ad3e9b351abf5b857e479b" Jan 26 15:16:04 crc kubenswrapper[4823]: I0126 15:16:04.026646 4823 scope.go:117] "RemoveContainer" containerID="cffa217ccc50d370619647ad0d153f771048807f0c8af80b4c910be8fe0ca577" Jan 26 
15:16:04 crc kubenswrapper[4823]: I0126 15:16:04.060481 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-xjz88"] Jan 26 15:16:04 crc kubenswrapper[4823]: I0126 15:16:04.068341 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-xjz88"] Jan 26 15:16:04 crc kubenswrapper[4823]: I0126 15:16:04.068920 4823 scope.go:117] "RemoveContainer" containerID="5ddced960be17ed81d13233645b6eeb12796e1ebabc5f2916ef8eb859ca99c57" Jan 26 15:16:04 crc kubenswrapper[4823]: I0126 15:16:04.093523 4823 scope.go:117] "RemoveContainer" containerID="466100ee52c96eb0a38bf74ee44711b88ab0aeab29df34f0dc7120cc8f0d56d2" Jan 26 15:16:04 crc kubenswrapper[4823]: I0126 15:16:04.145795 4823 scope.go:117] "RemoveContainer" containerID="73c3e7a5a99229b0ad8e4daa3f0a0a7857d02c7977e5f5c32c4a12fceadd5038" Jan 26 15:16:05 crc kubenswrapper[4823]: I0126 15:16:05.573061 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee" path="/var/lib/kubelet/pods/3aebf75b-969e-49ec-9be3-4f7a9ef3a2ee/volumes" Jan 26 15:16:07 crc kubenswrapper[4823]: I0126 15:16:07.451852 4823 generic.go:334] "Generic (PLEG): container finished" podID="c4a3642c-422b-460f-9554-6bcaeb591ea2" containerID="d3929e4a14c21df8fee731e99ffe9ccf66d32cf925928eccf332628a03810fd8" exitCode=0 Jan 26 15:16:07 crc kubenswrapper[4823]: I0126 15:16:07.451947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" event={"ID":"c4a3642c-422b-460f-9554-6bcaeb591ea2","Type":"ContainerDied","Data":"d3929e4a14c21df8fee731e99ffe9ccf66d32cf925928eccf332628a03810fd8"} Jan 26 15:16:08 crc kubenswrapper[4823]: I0126 15:16:08.844261 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:16:08 crc kubenswrapper[4823]: I0126 15:16:08.964174 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdwqc\" (UniqueName: \"kubernetes.io/projected/c4a3642c-422b-460f-9554-6bcaeb591ea2-kube-api-access-vdwqc\") pod \"c4a3642c-422b-460f-9554-6bcaeb591ea2\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " Jan 26 15:16:08 crc kubenswrapper[4823]: I0126 15:16:08.964278 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam\") pod \"c4a3642c-422b-460f-9554-6bcaeb591ea2\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " Jan 26 15:16:08 crc kubenswrapper[4823]: I0126 15:16:08.964444 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-inventory\") pod \"c4a3642c-422b-460f-9554-6bcaeb591ea2\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " Jan 26 15:16:08 crc kubenswrapper[4823]: I0126 15:16:08.973268 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4a3642c-422b-460f-9554-6bcaeb591ea2-kube-api-access-vdwqc" (OuterVolumeSpecName: "kube-api-access-vdwqc") pod "c4a3642c-422b-460f-9554-6bcaeb591ea2" (UID: "c4a3642c-422b-460f-9554-6bcaeb591ea2"). InnerVolumeSpecName "kube-api-access-vdwqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:16:09 crc kubenswrapper[4823]: E0126 15:16:09.015197 4823 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam podName:c4a3642c-422b-460f-9554-6bcaeb591ea2 nodeName:}" failed. 
No retries permitted until 2026-01-26 15:16:09.515163146 +0000 UTC m=+1766.200626261 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam") pod "c4a3642c-422b-460f-9554-6bcaeb591ea2" (UID: "c4a3642c-422b-460f-9554-6bcaeb591ea2") : error deleting /var/lib/kubelet/pods/c4a3642c-422b-460f-9554-6bcaeb591ea2/volume-subpaths: remove /var/lib/kubelet/pods/c4a3642c-422b-460f-9554-6bcaeb591ea2/volume-subpaths: no such file or directory Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.020119 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-inventory" (OuterVolumeSpecName: "inventory") pod "c4a3642c-422b-460f-9554-6bcaeb591ea2" (UID: "c4a3642c-422b-460f-9554-6bcaeb591ea2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.066992 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdwqc\" (UniqueName: \"kubernetes.io/projected/c4a3642c-422b-460f-9554-6bcaeb591ea2-kube-api-access-vdwqc\") on node \"crc\" DevicePath \"\"" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.067057 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.483692 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" event={"ID":"c4a3642c-422b-460f-9554-6bcaeb591ea2","Type":"ContainerDied","Data":"e273fc5e4695a563b624dd1525dbbc6dedd2a0617ca51f706a4c61b041486b5e"} Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.483755 4823 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="e273fc5e4695a563b624dd1525dbbc6dedd2a0617ca51f706a4c61b041486b5e" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.483793 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.579003 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam\") pod \"c4a3642c-422b-460f-9554-6bcaeb591ea2\" (UID: \"c4a3642c-422b-460f-9554-6bcaeb591ea2\") " Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.585025 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c4a3642c-422b-460f-9554-6bcaeb591ea2" (UID: "c4a3642c-422b-460f-9554-6bcaeb591ea2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.597814 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf"] Jan 26 15:16:09 crc kubenswrapper[4823]: E0126 15:16:09.598263 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837cede5-7802-40a7-a31f-09df765035ac" containerName="collect-profiles" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.598291 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="837cede5-7802-40a7-a31f-09df765035ac" containerName="collect-profiles" Jan 26 15:16:09 crc kubenswrapper[4823]: E0126 15:16:09.598305 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4a3642c-422b-460f-9554-6bcaeb591ea2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.598315 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4a3642c-422b-460f-9554-6bcaeb591ea2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.598596 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4a3642c-422b-460f-9554-6bcaeb591ea2" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.598625 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="837cede5-7802-40a7-a31f-09df765035ac" containerName="collect-profiles" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.599541 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.611071 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf"] Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.681044 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pld96\" (UniqueName: \"kubernetes.io/projected/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-kube-api-access-pld96\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.681108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.681209 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.681296 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c4a3642c-422b-460f-9554-6bcaeb591ea2-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.782928 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pld96\" (UniqueName: \"kubernetes.io/projected/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-kube-api-access-pld96\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.783022 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.783124 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.788756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.795592 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.812435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pld96\" (UniqueName: \"kubernetes.io/projected/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-kube-api-access-pld96\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:09 crc kubenswrapper[4823]: I0126 15:16:09.948718 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:10 crc kubenswrapper[4823]: I0126 15:16:10.044677 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7wvns"] Jan 26 15:16:10 crc kubenswrapper[4823]: I0126 15:16:10.055961 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7wvns"] Jan 26 15:16:10 crc kubenswrapper[4823]: I0126 15:16:10.495300 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf"] Jan 26 15:16:10 crc kubenswrapper[4823]: I0126 15:16:10.561586 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:16:10 crc kubenswrapper[4823]: E0126 15:16:10.561897 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.054538 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5021-account-create-update-mpthz"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.066025 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-fzqj6"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.075171 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-8rqmc"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.084908 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-39a1-account-create-update-46zz2"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.093457 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-f352-account-create-update-fzwc2"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.100464 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-5021-account-create-update-mpthz"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.106944 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-f352-account-create-update-fzwc2"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.113735 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-39a1-account-create-update-46zz2"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.122525 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-8rqmc"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.153690 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-fzqj6"] Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.508474 4823 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" event={"ID":"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5","Type":"ContainerStarted","Data":"8528170ea1d57250f9836b2b96ae6b103b69c52b973fd905e384d830ab500229"} Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.509062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" event={"ID":"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5","Type":"ContainerStarted","Data":"d7878012a7057948cc18006c7e20a236730ade7c4434b38453095406c9c04b74"} Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.535896 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" podStartSLOduration=2.057417083 podStartE2EDuration="2.535877518s" podCreationTimestamp="2026-01-26 15:16:09 +0000 UTC" firstStartedPulling="2026-01-26 15:16:10.496935062 +0000 UTC m=+1767.182398177" lastFinishedPulling="2026-01-26 15:16:10.975395507 +0000 UTC m=+1767.660858612" observedRunningTime="2026-01-26 15:16:11.528821387 +0000 UTC m=+1768.214284512" watchObservedRunningTime="2026-01-26 15:16:11.535877518 +0000 UTC m=+1768.221340623" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.577878 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c1a3789-de6b-4030-ab64-a9f504133124" path="/var/lib/kubelet/pods/0c1a3789-de6b-4030-ab64-a9f504133124/volumes" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.580408 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="677c0fed-1e1f-4155-95ee-86291a16effa" path="/var/lib/kubelet/pods/677c0fed-1e1f-4155-95ee-86291a16effa/volumes" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.581674 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c7d2689-33ea-47e3-ae2a-ad3b80f526b6" 
path="/var/lib/kubelet/pods/8c7d2689-33ea-47e3-ae2a-ad3b80f526b6/volumes" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.582551 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f40fd5-6264-4e1c-a0ff-94f71a0d994c" path="/var/lib/kubelet/pods/92f40fd5-6264-4e1c-a0ff-94f71a0d994c/volumes" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.584720 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e91b70-4f0c-4abc-bbb5-c7f75dc94736" path="/var/lib/kubelet/pods/a8e91b70-4f0c-4abc-bbb5-c7f75dc94736/volumes" Jan 26 15:16:11 crc kubenswrapper[4823]: I0126 15:16:11.585681 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4df5511-77f2-4005-9179-933a42374141" path="/var/lib/kubelet/pods/e4df5511-77f2-4005-9179-933a42374141/volumes" Jan 26 15:16:15 crc kubenswrapper[4823]: I0126 15:16:15.043334 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-gp7n5"] Jan 26 15:16:15 crc kubenswrapper[4823]: I0126 15:16:15.052270 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-gp7n5"] Jan 26 15:16:15 crc kubenswrapper[4823]: I0126 15:16:15.611951 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="461e74af-b7a9-4451-a07d-42f47a806286" path="/var/lib/kubelet/pods/461e74af-b7a9-4451-a07d-42f47a806286/volumes" Jan 26 15:16:16 crc kubenswrapper[4823]: I0126 15:16:16.554638 4823 generic.go:334] "Generic (PLEG): container finished" podID="635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" containerID="8528170ea1d57250f9836b2b96ae6b103b69c52b973fd905e384d830ab500229" exitCode=0 Jan 26 15:16:16 crc kubenswrapper[4823]: I0126 15:16:16.554695 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" event={"ID":"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5","Type":"ContainerDied","Data":"8528170ea1d57250f9836b2b96ae6b103b69c52b973fd905e384d830ab500229"} Jan 26 15:16:18 
crc kubenswrapper[4823]: I0126 15:16:18.084405 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.150733 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pld96\" (UniqueName: \"kubernetes.io/projected/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-kube-api-access-pld96\") pod \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.150815 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-ssh-key-openstack-edpm-ipam\") pod \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.151004 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-inventory\") pod \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\" (UID: \"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5\") " Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.156730 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-kube-api-access-pld96" (OuterVolumeSpecName: "kube-api-access-pld96") pod "635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" (UID: "635f75a1-b5f4-46ac-88ac-dfdd03b5cec5"). InnerVolumeSpecName "kube-api-access-pld96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.181407 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" (UID: "635f75a1-b5f4-46ac-88ac-dfdd03b5cec5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.181984 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-inventory" (OuterVolumeSpecName: "inventory") pod "635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" (UID: "635f75a1-b5f4-46ac-88ac-dfdd03b5cec5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.255255 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pld96\" (UniqueName: \"kubernetes.io/projected/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-kube-api-access-pld96\") on node \"crc\" DevicePath \"\"" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.255340 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.255357 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.603737 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" 
event={"ID":"635f75a1-b5f4-46ac-88ac-dfdd03b5cec5","Type":"ContainerDied","Data":"d7878012a7057948cc18006c7e20a236730ade7c4434b38453095406c9c04b74"} Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.603836 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7878012a7057948cc18006c7e20a236730ade7c4434b38453095406c9c04b74" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.603963 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.678557 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx"] Jan 26 15:16:18 crc kubenswrapper[4823]: E0126 15:16:18.679638 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.679680 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.680111 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.681164 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.683604 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.683823 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.683953 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.684626 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.719315 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx"] Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.766534 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.766633 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.766717 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxfhh\" (UniqueName: \"kubernetes.io/projected/632b9de7-75fd-44b2-94fb-faf1a6f005ef-kube-api-access-gxfhh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.867804 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxfhh\" (UniqueName: \"kubernetes.io/projected/632b9de7-75fd-44b2-94fb-faf1a6f005ef-kube-api-access-gxfhh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.867905 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.867960 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.872492 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.872859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:18 crc kubenswrapper[4823]: I0126 15:16:18.907307 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxfhh\" (UniqueName: \"kubernetes.io/projected/632b9de7-75fd-44b2-94fb-faf1a6f005ef-kube-api-access-gxfhh\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-4lssx\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:19 crc kubenswrapper[4823]: I0126 15:16:19.011137 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:16:19 crc kubenswrapper[4823]: I0126 15:16:19.558053 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx"] Jan 26 15:16:19 crc kubenswrapper[4823]: I0126 15:16:19.614502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" event={"ID":"632b9de7-75fd-44b2-94fb-faf1a6f005ef","Type":"ContainerStarted","Data":"ac72d59dadac55a9031bd651e1900744ab048af9665c7a4519d8f870b8786184"} Jan 26 15:16:20 crc kubenswrapper[4823]: I0126 15:16:20.627593 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" event={"ID":"632b9de7-75fd-44b2-94fb-faf1a6f005ef","Type":"ContainerStarted","Data":"4538880b5a3252353b31f9baa136b6932381849f7becf322e2b6315f4ecc54c1"} Jan 26 15:16:20 crc kubenswrapper[4823]: I0126 15:16:20.660128 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" podStartSLOduration=2.234946966 podStartE2EDuration="2.660106267s" podCreationTimestamp="2026-01-26 15:16:18 +0000 UTC" firstStartedPulling="2026-01-26 15:16:19.56657553 +0000 UTC m=+1776.252038625" lastFinishedPulling="2026-01-26 15:16:19.991734791 +0000 UTC m=+1776.677197926" observedRunningTime="2026-01-26 15:16:20.654895426 +0000 UTC m=+1777.340358551" watchObservedRunningTime="2026-01-26 15:16:20.660106267 +0000 UTC m=+1777.345569382" Jan 26 15:16:22 crc kubenswrapper[4823]: I0126 15:16:22.035539 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-h7c79"] Jan 26 15:16:22 crc kubenswrapper[4823]: I0126 15:16:22.044542 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-h7c79"] Jan 26 15:16:23 crc kubenswrapper[4823]: I0126 15:16:23.569559 4823 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c20b6e53-0f09-4af8-8d2b-02c1d50e3730" path="/var/lib/kubelet/pods/c20b6e53-0f09-4af8-8d2b-02c1d50e3730/volumes" Jan 26 15:16:24 crc kubenswrapper[4823]: I0126 15:16:24.561475 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:16:24 crc kubenswrapper[4823]: E0126 15:16:24.561885 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:16:38 crc kubenswrapper[4823]: I0126 15:16:38.560316 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:16:38 crc kubenswrapper[4823]: E0126 15:16:38.562921 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:16:53 crc kubenswrapper[4823]: I0126 15:16:53.566992 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:16:53 crc kubenswrapper[4823]: E0126 15:16:53.568208 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:17:03 crc kubenswrapper[4823]: I0126 15:17:03.038972 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" event={"ID":"632b9de7-75fd-44b2-94fb-faf1a6f005ef","Type":"ContainerDied","Data":"4538880b5a3252353b31f9baa136b6932381849f7becf322e2b6315f4ecc54c1"} Jan 26 15:17:03 crc kubenswrapper[4823]: I0126 15:17:03.039193 4823 generic.go:334] "Generic (PLEG): container finished" podID="632b9de7-75fd-44b2-94fb-faf1a6f005ef" containerID="4538880b5a3252353b31f9baa136b6932381849f7becf322e2b6315f4ecc54c1" exitCode=0 Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.350303 4823 scope.go:117] "RemoveContainer" containerID="12096333952f0dd8c96cb162ae9d50ed9b683b0f58988ffc754e393865453295" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.387022 4823 scope.go:117] "RemoveContainer" containerID="609c61bad94595b78b905ced3dc30429d010fd1a0abaff984a6aad556b09de3f" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.452096 4823 scope.go:117] "RemoveContainer" containerID="59e57f5606f7c786d0d950e89e03a9153bb90c3a4aef2df8a2b31c2e10e0f846" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.494836 4823 scope.go:117] "RemoveContainer" containerID="61e98fa470a3788913eade183aa51901c129b80e1df8aa8cfc6dcd0643ab2ae2" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.567251 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.594445 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxfhh\" (UniqueName: \"kubernetes.io/projected/632b9de7-75fd-44b2-94fb-faf1a6f005ef-kube-api-access-gxfhh\") pod \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.594568 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-ssh-key-openstack-edpm-ipam\") pod \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.594816 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-inventory\") pod \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\" (UID: \"632b9de7-75fd-44b2-94fb-faf1a6f005ef\") " Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.603397 4823 scope.go:117] "RemoveContainer" containerID="820a57e11cdaf4ced5c31c449c7034323316145b9f536e997874a6a3d2bec6f7" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.617300 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/632b9de7-75fd-44b2-94fb-faf1a6f005ef-kube-api-access-gxfhh" (OuterVolumeSpecName: "kube-api-access-gxfhh") pod "632b9de7-75fd-44b2-94fb-faf1a6f005ef" (UID: "632b9de7-75fd-44b2-94fb-faf1a6f005ef"). InnerVolumeSpecName "kube-api-access-gxfhh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.634257 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-inventory" (OuterVolumeSpecName: "inventory") pod "632b9de7-75fd-44b2-94fb-faf1a6f005ef" (UID: "632b9de7-75fd-44b2-94fb-faf1a6f005ef"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.634563 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "632b9de7-75fd-44b2-94fb-faf1a6f005ef" (UID: "632b9de7-75fd-44b2-94fb-faf1a6f005ef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.640571 4823 scope.go:117] "RemoveContainer" containerID="21879019d045fc133189ab18c89f0b4a011c4162f170d998769cf5821288791b" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.669224 4823 scope.go:117] "RemoveContainer" containerID="2b43a4cafb4611e599baf8abf6a3faa08c08c49454c7a4966390ccd4cdf30156" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.689971 4823 scope.go:117] "RemoveContainer" containerID="653aca22ea7335d198580d40a1e9271aeb3f5ad2e813b7743b369da49739b642" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.700945 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.700972 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/632b9de7-75fd-44b2-94fb-faf1a6f005ef-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.700982 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxfhh\" (UniqueName: \"kubernetes.io/projected/632b9de7-75fd-44b2-94fb-faf1a6f005ef-kube-api-access-gxfhh\") on node \"crc\" DevicePath \"\"" Jan 26 15:17:04 crc kubenswrapper[4823]: I0126 15:17:04.709955 4823 scope.go:117] "RemoveContainer" containerID="112560be173f448e111d3a3f526c688af0e81d8d2843ed84641543d53d69277f" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.061073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" event={"ID":"632b9de7-75fd-44b2-94fb-faf1a6f005ef","Type":"ContainerDied","Data":"ac72d59dadac55a9031bd651e1900744ab048af9665c7a4519d8f870b8786184"} Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.061122 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac72d59dadac55a9031bd651e1900744ab048af9665c7a4519d8f870b8786184" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.061146 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.140566 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75"] Jan 26 15:17:05 crc kubenswrapper[4823]: E0126 15:17:05.141184 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="632b9de7-75fd-44b2-94fb-faf1a6f005ef" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.141210 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="632b9de7-75fd-44b2-94fb-faf1a6f005ef" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.141499 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="632b9de7-75fd-44b2-94fb-faf1a6f005ef" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.142417 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.144672 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.144741 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.144922 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.145605 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.167102 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75"] Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.244652 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2nb\" (UniqueName: \"kubernetes.io/projected/86108dca-c7b6-4737-83b2-6b665cfdd9b4-kube-api-access-xn2nb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.244984 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.245015 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.346326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn2nb\" (UniqueName: \"kubernetes.io/projected/86108dca-c7b6-4737-83b2-6b665cfdd9b4-kube-api-access-xn2nb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.346453 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.346497 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.351971 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.354435 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.409642 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn2nb\" (UniqueName: \"kubernetes.io/projected/86108dca-c7b6-4737-83b2-6b665cfdd9b4-kube-api-access-xn2nb\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.461336 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:05 crc kubenswrapper[4823]: I0126 15:17:05.967382 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75"] Jan 26 15:17:06 crc kubenswrapper[4823]: I0126 15:17:06.068689 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" event={"ID":"86108dca-c7b6-4737-83b2-6b665cfdd9b4","Type":"ContainerStarted","Data":"263e7dc8d526afa1aa6b0c58ede38d481d174c693c8de47d4867b3ae6f6b0669"} Jan 26 15:17:07 crc kubenswrapper[4823]: I0126 15:17:07.080047 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" event={"ID":"86108dca-c7b6-4737-83b2-6b665cfdd9b4","Type":"ContainerStarted","Data":"d0c5d128c64e93bfe79ca8db80100ff913f598c449ffcf056f8a571f5971fe1e"} Jan 26 15:17:07 crc kubenswrapper[4823]: I0126 15:17:07.103348 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" podStartSLOduration=1.54639369 podStartE2EDuration="2.103330373s" podCreationTimestamp="2026-01-26 15:17:05 +0000 UTC" firstStartedPulling="2026-01-26 15:17:05.980833822 +0000 UTC m=+1822.666296967" lastFinishedPulling="2026-01-26 15:17:06.537770545 +0000 UTC m=+1823.223233650" observedRunningTime="2026-01-26 15:17:07.102141741 +0000 UTC m=+1823.787604886" watchObservedRunningTime="2026-01-26 15:17:07.103330373 +0000 UTC m=+1823.788793478" Jan 26 15:17:08 crc kubenswrapper[4823]: I0126 15:17:08.561499 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:17:08 crc kubenswrapper[4823]: E0126 15:17:08.562266 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:17:09 crc kubenswrapper[4823]: I0126 15:17:09.060695 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lnlnm"] Jan 26 15:17:09 crc kubenswrapper[4823]: I0126 15:17:09.070514 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-9c2rp"] Jan 26 15:17:09 crc kubenswrapper[4823]: I0126 15:17:09.081879 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-9c2rp"] Jan 26 15:17:09 crc kubenswrapper[4823]: I0126 15:17:09.093231 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lnlnm"] Jan 26 15:17:09 crc kubenswrapper[4823]: I0126 15:17:09.571319 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71eea416-ec1b-47dd-a6e2-b56ebb89a07f" path="/var/lib/kubelet/pods/71eea416-ec1b-47dd-a6e2-b56ebb89a07f/volumes" Jan 26 15:17:09 crc kubenswrapper[4823]: I0126 15:17:09.572809 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e" path="/var/lib/kubelet/pods/8e9fdb8c-d662-4be9-a9c3-d56d9a44f92e/volumes" Jan 26 15:17:11 crc kubenswrapper[4823]: I0126 15:17:11.118484 4823 generic.go:334] "Generic (PLEG): container finished" podID="86108dca-c7b6-4737-83b2-6b665cfdd9b4" containerID="d0c5d128c64e93bfe79ca8db80100ff913f598c449ffcf056f8a571f5971fe1e" exitCode=0 Jan 26 15:17:11 crc kubenswrapper[4823]: I0126 15:17:11.118581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" 
event={"ID":"86108dca-c7b6-4737-83b2-6b665cfdd9b4","Type":"ContainerDied","Data":"d0c5d128c64e93bfe79ca8db80100ff913f598c449ffcf056f8a571f5971fe1e"} Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.621160 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.704003 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-ssh-key-openstack-edpm-ipam\") pod \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.704191 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn2nb\" (UniqueName: \"kubernetes.io/projected/86108dca-c7b6-4737-83b2-6b665cfdd9b4-kube-api-access-xn2nb\") pod \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.704264 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-inventory\") pod \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\" (UID: \"86108dca-c7b6-4737-83b2-6b665cfdd9b4\") " Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.710256 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86108dca-c7b6-4737-83b2-6b665cfdd9b4-kube-api-access-xn2nb" (OuterVolumeSpecName: "kube-api-access-xn2nb") pod "86108dca-c7b6-4737-83b2-6b665cfdd9b4" (UID: "86108dca-c7b6-4737-83b2-6b665cfdd9b4"). InnerVolumeSpecName "kube-api-access-xn2nb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.734099 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-inventory" (OuterVolumeSpecName: "inventory") pod "86108dca-c7b6-4737-83b2-6b665cfdd9b4" (UID: "86108dca-c7b6-4737-83b2-6b665cfdd9b4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.752840 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "86108dca-c7b6-4737-83b2-6b665cfdd9b4" (UID: "86108dca-c7b6-4737-83b2-6b665cfdd9b4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.807382 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.807424 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn2nb\" (UniqueName: \"kubernetes.io/projected/86108dca-c7b6-4737-83b2-6b665cfdd9b4-kube-api-access-xn2nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:17:12 crc kubenswrapper[4823]: I0126 15:17:12.807435 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/86108dca-c7b6-4737-83b2-6b665cfdd9b4-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.141837 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" 
event={"ID":"86108dca-c7b6-4737-83b2-6b665cfdd9b4","Type":"ContainerDied","Data":"263e7dc8d526afa1aa6b0c58ede38d481d174c693c8de47d4867b3ae6f6b0669"} Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.141905 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.141907 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="263e7dc8d526afa1aa6b0c58ede38d481d174c693c8de47d4867b3ae6f6b0669" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.230423 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf"] Jan 26 15:17:13 crc kubenswrapper[4823]: E0126 15:17:13.231288 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86108dca-c7b6-4737-83b2-6b665cfdd9b4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.231391 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="86108dca-c7b6-4737-83b2-6b665cfdd9b4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.231750 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="86108dca-c7b6-4737-83b2-6b665cfdd9b4" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.232511 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.238256 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.239771 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.239840 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.240015 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.246427 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf"] Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.327134 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2bk\" (UniqueName: \"kubernetes.io/projected/c59678d3-cd2b-493b-9cac-3e7543982453-kube-api-access-9n2bk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.327476 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc 
kubenswrapper[4823]: I0126 15:17:13.327712 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.429842 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2bk\" (UniqueName: \"kubernetes.io/projected/c59678d3-cd2b-493b-9cac-3e7543982453-kube-api-access-9n2bk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.429929 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.430007 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.434309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.435343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.448926 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2bk\" (UniqueName: \"kubernetes.io/projected/c59678d3-cd2b-493b-9cac-3e7543982453-kube-api-access-9n2bk\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:13 crc kubenswrapper[4823]: I0126 15:17:13.552964 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:17:14 crc kubenswrapper[4823]: I0126 15:17:14.141062 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf"] Jan 26 15:17:14 crc kubenswrapper[4823]: I0126 15:17:14.154690 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" event={"ID":"c59678d3-cd2b-493b-9cac-3e7543982453","Type":"ContainerStarted","Data":"fa0f20492d28652e33712dae04e587da81c7e48970b0658dacc438eb4a7f5d62"} Jan 26 15:17:15 crc kubenswrapper[4823]: I0126 15:17:15.167818 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" event={"ID":"c59678d3-cd2b-493b-9cac-3e7543982453","Type":"ContainerStarted","Data":"93c40345a4c1e7559c83259c13a88a8671638bdc4704a9e3b1581d975c639d73"} Jan 26 15:17:15 crc kubenswrapper[4823]: I0126 15:17:15.192330 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" podStartSLOduration=1.6364880130000001 podStartE2EDuration="2.192305686s" podCreationTimestamp="2026-01-26 15:17:13 +0000 UTC" firstStartedPulling="2026-01-26 15:17:14.150924224 +0000 UTC m=+1830.836387319" lastFinishedPulling="2026-01-26 15:17:14.706741887 +0000 UTC m=+1831.392204992" observedRunningTime="2026-01-26 15:17:15.188581675 +0000 UTC m=+1831.874044840" watchObservedRunningTime="2026-01-26 15:17:15.192305686 +0000 UTC m=+1831.877768831" Jan 26 15:17:21 crc kubenswrapper[4823]: I0126 15:17:21.560464 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:17:21 crc kubenswrapper[4823]: E0126 15:17:21.561101 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:17:24 crc kubenswrapper[4823]: I0126 15:17:24.032451 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-fs2xh"] Jan 26 15:17:24 crc kubenswrapper[4823]: I0126 15:17:24.045932 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-fs2xh"] Jan 26 15:17:25 crc kubenswrapper[4823]: I0126 15:17:25.575514 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c91a8b-7077-4583-aa19-595408fb9003" path="/var/lib/kubelet/pods/c5c91a8b-7077-4583-aa19-595408fb9003/volumes" Jan 26 15:17:35 crc kubenswrapper[4823]: I0126 15:17:35.560815 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:17:35 crc kubenswrapper[4823]: E0126 15:17:35.562055 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:17:37 crc kubenswrapper[4823]: I0126 15:17:37.040620 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-nn9br"] Jan 26 15:17:37 crc kubenswrapper[4823]: I0126 15:17:37.048629 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-nn9br"] Jan 26 15:17:37 crc kubenswrapper[4823]: I0126 15:17:37.570931 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="2dcb08f2-c175-4602-9a45-dad635436a22" path="/var/lib/kubelet/pods/2dcb08f2-c175-4602-9a45-dad635436a22/volumes" Jan 26 15:17:38 crc kubenswrapper[4823]: I0126 15:17:38.036972 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-qx574"] Jan 26 15:17:38 crc kubenswrapper[4823]: I0126 15:17:38.048389 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-qx574"] Jan 26 15:17:39 crc kubenswrapper[4823]: I0126 15:17:39.579115 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae97ed0-0d88-4581-ab58-b4a97f8947ad" path="/var/lib/kubelet/pods/3ae97ed0-0d88-4581-ab58-b4a97f8947ad/volumes" Jan 26 15:17:47 crc kubenswrapper[4823]: I0126 15:17:47.561291 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:17:47 crc kubenswrapper[4823]: E0126 15:17:47.562202 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:18:02 crc kubenswrapper[4823]: I0126 15:18:02.560706 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:18:02 crc kubenswrapper[4823]: E0126 15:18:02.561639 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:18:04 crc kubenswrapper[4823]: I0126 15:18:04.984398 4823 scope.go:117] "RemoveContainer" containerID="31a9626d076460a91c0c8ab4199e737a293e7dd11fdfb153b6b506e88f02e14d" Jan 26 15:18:05 crc kubenswrapper[4823]: I0126 15:18:05.010791 4823 scope.go:117] "RemoveContainer" containerID="ddae000bdecc60171ac85cea63e209b0ecfc8031aaca515b5810843d2659ff34" Jan 26 15:18:05 crc kubenswrapper[4823]: I0126 15:18:05.090211 4823 scope.go:117] "RemoveContainer" containerID="d2e0078e1fb0c6aba703a6928db4a92b7391e435673e16e0c87e303a9182265b" Jan 26 15:18:05 crc kubenswrapper[4823]: I0126 15:18:05.152822 4823 scope.go:117] "RemoveContainer" containerID="f03f17b4a7717d511696f689b0841e0f3a144d1f42906d3ec3d9f58c22e254ef" Jan 26 15:18:05 crc kubenswrapper[4823]: I0126 15:18:05.198979 4823 scope.go:117] "RemoveContainer" containerID="02d0134d4ecb0e0d29dd85b9d5c98ec01b3ac4c257702bf1299e26fd6e12286c" Jan 26 15:18:07 crc kubenswrapper[4823]: I0126 15:18:07.750288 4823 generic.go:334] "Generic (PLEG): container finished" podID="c59678d3-cd2b-493b-9cac-3e7543982453" containerID="93c40345a4c1e7559c83259c13a88a8671638bdc4704a9e3b1581d975c639d73" exitCode=0 Jan 26 15:18:07 crc kubenswrapper[4823]: I0126 15:18:07.750392 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" event={"ID":"c59678d3-cd2b-493b-9cac-3e7543982453","Type":"ContainerDied","Data":"93c40345a4c1e7559c83259c13a88a8671638bdc4704a9e3b1581d975c639d73"} Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.048425 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-cb5a-account-create-update-27lns"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.054153 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-9jh8h"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.060650 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-7w4md"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.066634 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-97whl"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.074942 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-cb5a-account-create-update-27lns"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.084649 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-9jh8h"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.091663 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-97whl"] Jan 26 15:18:08 crc kubenswrapper[4823]: I0126 15:18:08.098896 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-7w4md"] Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.061919 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-da95-account-create-update-ctl6c"] Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.077869 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2831-account-create-update-f6wxs"] Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.092067 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-da95-account-create-update-ctl6c"] Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.100430 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2831-account-create-update-f6wxs"] Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.204322 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.331815 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-ssh-key-openstack-edpm-ipam\") pod \"c59678d3-cd2b-493b-9cac-3e7543982453\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.331954 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-inventory\") pod \"c59678d3-cd2b-493b-9cac-3e7543982453\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.332132 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2bk\" (UniqueName: \"kubernetes.io/projected/c59678d3-cd2b-493b-9cac-3e7543982453-kube-api-access-9n2bk\") pod \"c59678d3-cd2b-493b-9cac-3e7543982453\" (UID: \"c59678d3-cd2b-493b-9cac-3e7543982453\") " Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.337555 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59678d3-cd2b-493b-9cac-3e7543982453-kube-api-access-9n2bk" (OuterVolumeSpecName: "kube-api-access-9n2bk") pod "c59678d3-cd2b-493b-9cac-3e7543982453" (UID: "c59678d3-cd2b-493b-9cac-3e7543982453"). InnerVolumeSpecName "kube-api-access-9n2bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.359480 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c59678d3-cd2b-493b-9cac-3e7543982453" (UID: "c59678d3-cd2b-493b-9cac-3e7543982453"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.366975 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-inventory" (OuterVolumeSpecName: "inventory") pod "c59678d3-cd2b-493b-9cac-3e7543982453" (UID: "c59678d3-cd2b-493b-9cac-3e7543982453"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.435001 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.435045 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n2bk\" (UniqueName: \"kubernetes.io/projected/c59678d3-cd2b-493b-9cac-3e7543982453-kube-api-access-9n2bk\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.435059 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c59678d3-cd2b-493b-9cac-3e7543982453-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.574563 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7610ca35-36e4-45dd-b20d-e0ea80b3f62d" 
path="/var/lib/kubelet/pods/7610ca35-36e4-45dd-b20d-e0ea80b3f62d/volumes" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.575284 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77121317-dc3e-497c-878a-b3077fef4920" path="/var/lib/kubelet/pods/77121317-dc3e-497c-878a-b3077fef4920/volumes" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.576451 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4119b40-4749-455b-9bba-68fdf24554a0" path="/var/lib/kubelet/pods/c4119b40-4749-455b-9bba-68fdf24554a0/volumes" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.577930 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c430bc54-e863-4d5d-bb23-0f54084f28a0" path="/var/lib/kubelet/pods/c430bc54-e863-4d5d-bb23-0f54084f28a0/volumes" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.579537 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97a7ddc-c557-4d5c-80d7-60fd099d192d" path="/var/lib/kubelet/pods/c97a7ddc-c557-4d5c-80d7-60fd099d192d/volumes" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.580568 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f" path="/var/lib/kubelet/pods/cdd4861e-57bf-42d5-a4c8-afa4dbd0a79f/volumes" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.771607 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" event={"ID":"c59678d3-cd2b-493b-9cac-3e7543982453","Type":"ContainerDied","Data":"fa0f20492d28652e33712dae04e587da81c7e48970b0658dacc438eb4a7f5d62"} Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.771944 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa0f20492d28652e33712dae04e587da81c7e48970b0658dacc438eb4a7f5d62" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.771717 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.936933 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4t8g9"] Jan 26 15:18:09 crc kubenswrapper[4823]: E0126 15:18:09.937534 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59678d3-cd2b-493b-9cac-3e7543982453" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.937599 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59678d3-cd2b-493b-9cac-3e7543982453" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.937903 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59678d3-cd2b-493b-9cac-3e7543982453" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.938853 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.941837 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.942263 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.942654 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.942897 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:18:09 crc kubenswrapper[4823]: I0126 15:18:09.944516 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4t8g9"] Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.047108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.047276 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fngj6\" (UniqueName: \"kubernetes.io/projected/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-kube-api-access-fngj6\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.047603 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.149767 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.149832 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fngj6\" (UniqueName: \"kubernetes.io/projected/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-kube-api-access-fngj6\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.149903 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.155468 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: 
I0126 15:18:10.156870 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.175977 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fngj6\" (UniqueName: \"kubernetes.io/projected/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-kube-api-access-fngj6\") pod \"ssh-known-hosts-edpm-deployment-4t8g9\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.262287 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:10 crc kubenswrapper[4823]: I0126 15:18:10.870076 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4t8g9"] Jan 26 15:18:11 crc kubenswrapper[4823]: I0126 15:18:11.788410 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" event={"ID":"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb","Type":"ContainerStarted","Data":"f6ca9e6c6b19320d47667cea908e90f57a0c6d2eb2b907478308b067ac5018ae"} Jan 26 15:18:11 crc kubenswrapper[4823]: I0126 15:18:11.788853 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" event={"ID":"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb","Type":"ContainerStarted","Data":"2b7d6fc5522293677ffeb7dcb30161791e9d332857d2adcd7f7188415f27dd83"} Jan 26 15:18:11 crc kubenswrapper[4823]: I0126 15:18:11.815557 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" podStartSLOduration=2.385037652 
podStartE2EDuration="2.815541597s" podCreationTimestamp="2026-01-26 15:18:09 +0000 UTC" firstStartedPulling="2026-01-26 15:18:10.874325348 +0000 UTC m=+1887.559788453" lastFinishedPulling="2026-01-26 15:18:11.304829293 +0000 UTC m=+1887.990292398" observedRunningTime="2026-01-26 15:18:11.809337428 +0000 UTC m=+1888.494800533" watchObservedRunningTime="2026-01-26 15:18:11.815541597 +0000 UTC m=+1888.501004702" Jan 26 15:18:16 crc kubenswrapper[4823]: I0126 15:18:16.561158 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:18:16 crc kubenswrapper[4823]: E0126 15:18:16.562349 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:18:19 crc kubenswrapper[4823]: I0126 15:18:19.866874 4823 generic.go:334] "Generic (PLEG): container finished" podID="9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" containerID="f6ca9e6c6b19320d47667cea908e90f57a0c6d2eb2b907478308b067ac5018ae" exitCode=0 Jan 26 15:18:19 crc kubenswrapper[4823]: I0126 15:18:19.866999 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" event={"ID":"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb","Type":"ContainerDied","Data":"f6ca9e6c6b19320d47667cea908e90f57a0c6d2eb2b907478308b067ac5018ae"} Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.246723 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.310613 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-ssh-key-openstack-edpm-ipam\") pod \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.310670 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fngj6\" (UniqueName: \"kubernetes.io/projected/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-kube-api-access-fngj6\") pod \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.310741 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-inventory-0\") pod \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\" (UID: \"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb\") " Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.321134 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-kube-api-access-fngj6" (OuterVolumeSpecName: "kube-api-access-fngj6") pod "9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" (UID: "9284efb6-1f6a-4eaf-9ec1-f8263d674ceb"). InnerVolumeSpecName "kube-api-access-fngj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.345105 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" (UID: "9284efb6-1f6a-4eaf-9ec1-f8263d674ceb"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.348509 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" (UID: "9284efb6-1f6a-4eaf-9ec1-f8263d674ceb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.412726 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fngj6\" (UniqueName: \"kubernetes.io/projected/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-kube-api-access-fngj6\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.412779 4823 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.412793 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.883392 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" event={"ID":"9284efb6-1f6a-4eaf-9ec1-f8263d674ceb","Type":"ContainerDied","Data":"2b7d6fc5522293677ffeb7dcb30161791e9d332857d2adcd7f7188415f27dd83"} Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.883702 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7d6fc5522293677ffeb7dcb30161791e9d332857d2adcd7f7188415f27dd83" Jan 26 15:18:21 crc kubenswrapper[4823]: I0126 15:18:21.883440 
4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-4t8g9" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.038416 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4"] Jan 26 15:18:22 crc kubenswrapper[4823]: E0126 15:18:22.038788 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" containerName="ssh-known-hosts-edpm-deployment" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.038806 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" containerName="ssh-known-hosts-edpm-deployment" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.038969 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" containerName="ssh-known-hosts-edpm-deployment" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.039547 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.045669 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.045744 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.045669 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.045939 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.047647 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4"] Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.131430 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6k4v\" (UniqueName: \"kubernetes.io/projected/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-kube-api-access-f6k4v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.131523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.131615 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.233285 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.233476 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6k4v\" (UniqueName: \"kubernetes.io/projected/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-kube-api-access-f6k4v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.233602 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.239450 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: 
\"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.249248 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6k4v\" (UniqueName: \"kubernetes.io/projected/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-kube-api-access-f6k4v\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.251617 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-qjqr4\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.375240 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:22 crc kubenswrapper[4823]: I0126 15:18:22.934260 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4"] Jan 26 15:18:23 crc kubenswrapper[4823]: I0126 15:18:23.903016 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" event={"ID":"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360","Type":"ContainerStarted","Data":"2473781a4b1eb5b632977dda56370a8a67ec7a7391a5b9170cae4deb3016cb66"} Jan 26 15:18:23 crc kubenswrapper[4823]: I0126 15:18:23.903894 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" event={"ID":"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360","Type":"ContainerStarted","Data":"10de074dc6a48f560c4ef1f30322eb270b2c35f586f15297e22ed9b276cb9acf"} Jan 26 15:18:23 crc kubenswrapper[4823]: I0126 15:18:23.925265 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" podStartSLOduration=1.412180271 podStartE2EDuration="1.925235318s" podCreationTimestamp="2026-01-26 15:18:22 +0000 UTC" firstStartedPulling="2026-01-26 15:18:22.944730907 +0000 UTC m=+1899.630194022" lastFinishedPulling="2026-01-26 15:18:23.457785964 +0000 UTC m=+1900.143249069" observedRunningTime="2026-01-26 15:18:23.916278164 +0000 UTC m=+1900.601741269" watchObservedRunningTime="2026-01-26 15:18:23.925235318 +0000 UTC m=+1900.610698423" Jan 26 15:18:29 crc kubenswrapper[4823]: I0126 15:18:29.562220 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:18:29 crc kubenswrapper[4823]: E0126 15:18:29.563828 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:18:31 crc kubenswrapper[4823]: I0126 15:18:31.971770 4823 generic.go:334] "Generic (PLEG): container finished" podID="49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" containerID="2473781a4b1eb5b632977dda56370a8a67ec7a7391a5b9170cae4deb3016cb66" exitCode=0 Jan 26 15:18:31 crc kubenswrapper[4823]: I0126 15:18:31.971869 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" event={"ID":"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360","Type":"ContainerDied","Data":"2473781a4b1eb5b632977dda56370a8a67ec7a7391a5b9170cae4deb3016cb66"} Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.372700 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.455679 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-ssh-key-openstack-edpm-ipam\") pod \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.455741 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6k4v\" (UniqueName: \"kubernetes.io/projected/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-kube-api-access-f6k4v\") pod \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.456027 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-inventory\") pod \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\" (UID: \"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360\") " Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.465705 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-kube-api-access-f6k4v" (OuterVolumeSpecName: "kube-api-access-f6k4v") pod "49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" (UID: "49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360"). InnerVolumeSpecName "kube-api-access-f6k4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.483959 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" (UID: "49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.485867 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-inventory" (OuterVolumeSpecName: "inventory") pod "49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" (UID: "49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.559143 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.559187 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.559205 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6k4v\" (UniqueName: \"kubernetes.io/projected/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360-kube-api-access-f6k4v\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.990623 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" event={"ID":"49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360","Type":"ContainerDied","Data":"10de074dc6a48f560c4ef1f30322eb270b2c35f586f15297e22ed9b276cb9acf"} Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.990666 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10de074dc6a48f560c4ef1f30322eb270b2c35f586f15297e22ed9b276cb9acf" Jan 26 15:18:33 crc kubenswrapper[4823]: I0126 15:18:33.991138 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.103433 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6"] Jan 26 15:18:34 crc kubenswrapper[4823]: E0126 15:18:34.104110 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.104141 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.104398 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.105289 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.109265 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.109600 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.110243 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.112246 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6"] Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.115871 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.172967 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scfjb\" (UniqueName: \"kubernetes.io/projected/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-kube-api-access-scfjb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.173049 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 
15:18:34.173106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.274896 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.275084 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scfjb\" (UniqueName: \"kubernetes.io/projected/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-kube-api-access-scfjb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.275107 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.280726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.280883 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.309202 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scfjb\" (UniqueName: \"kubernetes.io/projected/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-kube-api-access-scfjb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.438255 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:34 crc kubenswrapper[4823]: I0126 15:18:34.975557 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6"] Jan 26 15:18:35 crc kubenswrapper[4823]: I0126 15:18:35.000615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" event={"ID":"c5c0ecde-3daa-4c62-be28-4cb76ac205e0","Type":"ContainerStarted","Data":"66df69a8cc3bf27bce6fc79dd527cb9bbdd3944ef4782d8bc27027c6539ac4a3"} Jan 26 15:18:36 crc kubenswrapper[4823]: I0126 15:18:36.012379 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" event={"ID":"c5c0ecde-3daa-4c62-be28-4cb76ac205e0","Type":"ContainerStarted","Data":"c067516d502d4390ca5f82df69a03afa917f612d4d2d8fb0ee0a3c19c64e2df0"} Jan 26 15:18:36 crc kubenswrapper[4823]: I0126 15:18:36.037561 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" podStartSLOduration=1.416323752 podStartE2EDuration="2.03754257s" podCreationTimestamp="2026-01-26 15:18:34 +0000 UTC" firstStartedPulling="2026-01-26 15:18:34.992263552 +0000 UTC m=+1911.677726657" lastFinishedPulling="2026-01-26 15:18:35.61348237 +0000 UTC m=+1912.298945475" observedRunningTime="2026-01-26 15:18:36.033689324 +0000 UTC m=+1912.719152439" watchObservedRunningTime="2026-01-26 15:18:36.03754257 +0000 UTC m=+1912.723005675" Jan 26 15:18:38 crc kubenswrapper[4823]: I0126 15:18:38.108089 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhdz4"] Jan 26 15:18:38 crc kubenswrapper[4823]: I0126 15:18:38.119625 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhdz4"] Jan 26 15:18:39 crc kubenswrapper[4823]: I0126 
15:18:39.573742 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcfb508b-ce02-4bc4-a362-b309ece5fd3c" path="/var/lib/kubelet/pods/dcfb508b-ce02-4bc4-a362-b309ece5fd3c/volumes" Jan 26 15:18:44 crc kubenswrapper[4823]: I0126 15:18:44.560551 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:18:45 crc kubenswrapper[4823]: I0126 15:18:45.108306 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"961c11d3241ec3d801f95f0e8e42a013e9b9699a65807be36395bc3fcc849454"} Jan 26 15:18:46 crc kubenswrapper[4823]: I0126 15:18:46.123338 4823 generic.go:334] "Generic (PLEG): container finished" podID="c5c0ecde-3daa-4c62-be28-4cb76ac205e0" containerID="c067516d502d4390ca5f82df69a03afa917f612d4d2d8fb0ee0a3c19c64e2df0" exitCode=0 Jan 26 15:18:46 crc kubenswrapper[4823]: I0126 15:18:46.123557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" event={"ID":"c5c0ecde-3daa-4c62-be28-4cb76ac205e0","Type":"ContainerDied","Data":"c067516d502d4390ca5f82df69a03afa917f612d4d2d8fb0ee0a3c19c64e2df0"} Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.598030 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.758154 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scfjb\" (UniqueName: \"kubernetes.io/projected/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-kube-api-access-scfjb\") pod \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.758609 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-ssh-key-openstack-edpm-ipam\") pod \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.758660 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-inventory\") pod \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\" (UID: \"c5c0ecde-3daa-4c62-be28-4cb76ac205e0\") " Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.768674 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-kube-api-access-scfjb" (OuterVolumeSpecName: "kube-api-access-scfjb") pod "c5c0ecde-3daa-4c62-be28-4cb76ac205e0" (UID: "c5c0ecde-3daa-4c62-be28-4cb76ac205e0"). InnerVolumeSpecName "kube-api-access-scfjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.790675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c5c0ecde-3daa-4c62-be28-4cb76ac205e0" (UID: "c5c0ecde-3daa-4c62-be28-4cb76ac205e0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.799806 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-inventory" (OuterVolumeSpecName: "inventory") pod "c5c0ecde-3daa-4c62-be28-4cb76ac205e0" (UID: "c5c0ecde-3daa-4c62-be28-4cb76ac205e0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.863231 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scfjb\" (UniqueName: \"kubernetes.io/projected/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-kube-api-access-scfjb\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.863274 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:47 crc kubenswrapper[4823]: I0126 15:18:47.863291 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c5c0ecde-3daa-4c62-be28-4cb76ac205e0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:18:48 crc kubenswrapper[4823]: I0126 15:18:48.143839 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" 
event={"ID":"c5c0ecde-3daa-4c62-be28-4cb76ac205e0","Type":"ContainerDied","Data":"66df69a8cc3bf27bce6fc79dd527cb9bbdd3944ef4782d8bc27027c6539ac4a3"} Jan 26 15:18:48 crc kubenswrapper[4823]: I0126 15:18:48.143887 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66df69a8cc3bf27bce6fc79dd527cb9bbdd3944ef4782d8bc27027c6539ac4a3" Jan 26 15:18:48 crc kubenswrapper[4823]: I0126 15:18:48.143889 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6" Jan 26 15:19:02 crc kubenswrapper[4823]: I0126 15:19:02.091111 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ddcsn"] Jan 26 15:19:02 crc kubenswrapper[4823]: I0126 15:19:02.101377 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-ddcsn"] Jan 26 15:19:03 crc kubenswrapper[4823]: I0126 15:19:03.054814 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-cfvtn"] Jan 26 15:19:03 crc kubenswrapper[4823]: I0126 15:19:03.069898 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-cfvtn"] Jan 26 15:19:03 crc kubenswrapper[4823]: I0126 15:19:03.580612 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d63df0-04ad-4cab-b8ae-e6cbb09c28e2" path="/var/lib/kubelet/pods/56d63df0-04ad-4cab-b8ae-e6cbb09c28e2/volumes" Jan 26 15:19:03 crc kubenswrapper[4823]: I0126 15:19:03.582302 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6d4313b-cd31-4952-8f17-0a5021c4adc3" path="/var/lib/kubelet/pods/b6d4313b-cd31-4952-8f17-0a5021c4adc3/volumes" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.320326 4823 scope.go:117] "RemoveContainer" containerID="726164a2aa520369c61dc9c8f0a5763f054a1c63c9cb2ba7134adb34bd3f3356" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.389191 4823 
scope.go:117] "RemoveContainer" containerID="04c589ae20d714bef4a03f8fda76536be574017b4b25e6a5c8710fd5544a948c" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.407681 4823 scope.go:117] "RemoveContainer" containerID="6c4a85ec665ab61c8ab8adc091cff924710e41b838317144fc7522187cc7eddc" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.452866 4823 scope.go:117] "RemoveContainer" containerID="aa2c3fb280da60360c7695bb61cf0cf35ae2276aef775d0a6c0832363e1bdb40" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.518821 4823 scope.go:117] "RemoveContainer" containerID="0458cfd782b301eded2db5ebb278824d6f5179cc8b7b0cbadbb35ac7a99f3aa6" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.576882 4823 scope.go:117] "RemoveContainer" containerID="f644c79cade7b2e43f64288bd4a71c707b7c7294998c671c2b2cec1ddfbea982" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.612838 4823 scope.go:117] "RemoveContainer" containerID="9bbc7a504b2809ba634cb5b53d12302e8ae9f801da5448329bd4e42de98d60aa" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.637800 4823 scope.go:117] "RemoveContainer" containerID="bc55c1811af913365a6c6392936f2e02461e211ddf16ab71c100c3545cd4a870" Jan 26 15:19:05 crc kubenswrapper[4823]: I0126 15:19:05.666110 4823 scope.go:117] "RemoveContainer" containerID="48b196c1f4a1cf16909793a13f358ff60dfb0e0196ead0c7023803be3958c56a" Jan 26 15:19:47 crc kubenswrapper[4823]: I0126 15:19:47.050830 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-h7nxv"] Jan 26 15:19:47 crc kubenswrapper[4823]: I0126 15:19:47.058710 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-h7nxv"] Jan 26 15:19:47 crc kubenswrapper[4823]: I0126 15:19:47.580508 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0d3991-e82c-495e-bce4-2ce236179c32" path="/var/lib/kubelet/pods/0a0d3991-e82c-495e-bce4-2ce236179c32/volumes" Jan 26 15:20:05 crc kubenswrapper[4823]: I0126 
15:20:05.850942 4823 scope.go:117] "RemoveContainer" containerID="0d2e3eeeaf3ba4095f8b73ff504f0ba331ee1f87ec82c9a9f6a6dbc3c9679d30" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.606631 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c44"] Jan 26 15:20:20 crc kubenswrapper[4823]: E0126 15:20:20.607814 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c0ecde-3daa-4c62-be28-4cb76ac205e0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.607839 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c0ecde-3daa-4c62-be28-4cb76ac205e0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.608117 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c0ecde-3daa-4c62-be28-4cb76ac205e0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.610095 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.628289 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c44"] Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.649717 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp4p\" (UniqueName: \"kubernetes.io/projected/0083556a-b71c-48fd-be49-8bce579b479d-kube-api-access-8mp4p\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.649818 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-utilities\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.649942 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-catalog-content\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.751196 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp4p\" (UniqueName: \"kubernetes.io/projected/0083556a-b71c-48fd-be49-8bce579b479d-kube-api-access-8mp4p\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.751275 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-utilities\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.751385 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-catalog-content\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.751806 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-catalog-content\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.751922 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-utilities\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.772710 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp4p\" (UniqueName: \"kubernetes.io/projected/0083556a-b71c-48fd-be49-8bce579b479d-kube-api-access-8mp4p\") pod \"redhat-marketplace-q7c44\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:20 crc kubenswrapper[4823]: I0126 15:20:20.930424 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:21 crc kubenswrapper[4823]: I0126 15:20:21.438680 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c44"] Jan 26 15:20:22 crc kubenswrapper[4823]: I0126 15:20:22.056042 4823 generic.go:334] "Generic (PLEG): container finished" podID="0083556a-b71c-48fd-be49-8bce579b479d" containerID="5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18" exitCode=0 Jan 26 15:20:22 crc kubenswrapper[4823]: I0126 15:20:22.056201 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c44" event={"ID":"0083556a-b71c-48fd-be49-8bce579b479d","Type":"ContainerDied","Data":"5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18"} Jan 26 15:20:22 crc kubenswrapper[4823]: I0126 15:20:22.056330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c44" event={"ID":"0083556a-b71c-48fd-be49-8bce579b479d","Type":"ContainerStarted","Data":"94828b3bfa46cbbdddf52865846785b833ebca273d83c35437c65e922a034df8"} Jan 26 15:20:22 crc kubenswrapper[4823]: I0126 15:20:22.058260 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:20:23 crc kubenswrapper[4823]: I0126 15:20:23.067241 4823 generic.go:334] "Generic (PLEG): container finished" podID="0083556a-b71c-48fd-be49-8bce579b479d" containerID="dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7" exitCode=0 Jan 26 15:20:23 crc kubenswrapper[4823]: I0126 15:20:23.067286 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c44" event={"ID":"0083556a-b71c-48fd-be49-8bce579b479d","Type":"ContainerDied","Data":"dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7"} Jan 26 15:20:24 crc kubenswrapper[4823]: I0126 15:20:24.079734 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-q7c44" event={"ID":"0083556a-b71c-48fd-be49-8bce579b479d","Type":"ContainerStarted","Data":"3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177"} Jan 26 15:20:24 crc kubenswrapper[4823]: I0126 15:20:24.113145 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7c44" podStartSLOduration=2.684219452 podStartE2EDuration="4.113125917s" podCreationTimestamp="2026-01-26 15:20:20 +0000 UTC" firstStartedPulling="2026-01-26 15:20:22.058065379 +0000 UTC m=+2018.743528484" lastFinishedPulling="2026-01-26 15:20:23.486971844 +0000 UTC m=+2020.172434949" observedRunningTime="2026-01-26 15:20:24.107426311 +0000 UTC m=+2020.792889426" watchObservedRunningTime="2026-01-26 15:20:24.113125917 +0000 UTC m=+2020.798589022" Jan 26 15:20:30 crc kubenswrapper[4823]: I0126 15:20:30.931232 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:30 crc kubenswrapper[4823]: I0126 15:20:30.931838 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:30 crc kubenswrapper[4823]: I0126 15:20:30.995114 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:31 crc kubenswrapper[4823]: I0126 15:20:31.200433 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:31 crc kubenswrapper[4823]: I0126 15:20:31.252247 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c44"] Jan 26 15:20:33 crc kubenswrapper[4823]: I0126 15:20:33.164066 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7c44" 
podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="registry-server" containerID="cri-o://3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177" gracePeriod=2 Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.156916 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.185784 4823 generic.go:334] "Generic (PLEG): container finished" podID="0083556a-b71c-48fd-be49-8bce579b479d" containerID="3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177" exitCode=0 Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.185859 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c44" event={"ID":"0083556a-b71c-48fd-be49-8bce579b479d","Type":"ContainerDied","Data":"3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177"} Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.185891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7c44" event={"ID":"0083556a-b71c-48fd-be49-8bce579b479d","Type":"ContainerDied","Data":"94828b3bfa46cbbdddf52865846785b833ebca273d83c35437c65e922a034df8"} Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.185914 4823 scope.go:117] "RemoveContainer" containerID="3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.186005 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7c44" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.217726 4823 scope.go:117] "RemoveContainer" containerID="dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.241607 4823 scope.go:117] "RemoveContainer" containerID="5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.252651 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-utilities\") pod \"0083556a-b71c-48fd-be49-8bce579b479d\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.252886 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mp4p\" (UniqueName: \"kubernetes.io/projected/0083556a-b71c-48fd-be49-8bce579b479d-kube-api-access-8mp4p\") pod \"0083556a-b71c-48fd-be49-8bce579b479d\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.252925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-catalog-content\") pod \"0083556a-b71c-48fd-be49-8bce579b479d\" (UID: \"0083556a-b71c-48fd-be49-8bce579b479d\") " Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.254064 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-utilities" (OuterVolumeSpecName: "utilities") pod "0083556a-b71c-48fd-be49-8bce579b479d" (UID: "0083556a-b71c-48fd-be49-8bce579b479d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.258730 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0083556a-b71c-48fd-be49-8bce579b479d-kube-api-access-8mp4p" (OuterVolumeSpecName: "kube-api-access-8mp4p") pod "0083556a-b71c-48fd-be49-8bce579b479d" (UID: "0083556a-b71c-48fd-be49-8bce579b479d"). InnerVolumeSpecName "kube-api-access-8mp4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.281081 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0083556a-b71c-48fd-be49-8bce579b479d" (UID: "0083556a-b71c-48fd-be49-8bce579b479d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.346920 4823 scope.go:117] "RemoveContainer" containerID="3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177" Jan 26 15:20:34 crc kubenswrapper[4823]: E0126 15:20:34.347392 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177\": container with ID starting with 3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177 not found: ID does not exist" containerID="3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.347425 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177"} err="failed to get container status \"3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177\": rpc error: code = NotFound desc = could not find 
container \"3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177\": container with ID starting with 3c4186543eb09cb5f8feb49800834f8b185e6700b2d6907bda85d219ead42177 not found: ID does not exist" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.347448 4823 scope.go:117] "RemoveContainer" containerID="dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7" Jan 26 15:20:34 crc kubenswrapper[4823]: E0126 15:20:34.347758 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7\": container with ID starting with dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7 not found: ID does not exist" containerID="dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.347801 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7"} err="failed to get container status \"dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7\": rpc error: code = NotFound desc = could not find container \"dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7\": container with ID starting with dff7e02f6a8271a2753880e697319780bfb2b3db82b0f656c3c2c516109274c7 not found: ID does not exist" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.347826 4823 scope.go:117] "RemoveContainer" containerID="5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18" Jan 26 15:20:34 crc kubenswrapper[4823]: E0126 15:20:34.348260 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18\": container with ID starting with 5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18 not found: ID does 
not exist" containerID="5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.348296 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18"} err="failed to get container status \"5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18\": rpc error: code = NotFound desc = could not find container \"5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18\": container with ID starting with 5e63108b67d471d044cbadf13806830475f5c632167de2a69e15a5ea65b64c18 not found: ID does not exist" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.355052 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mp4p\" (UniqueName: \"kubernetes.io/projected/0083556a-b71c-48fd-be49-8bce579b479d-kube-api-access-8mp4p\") on node \"crc\" DevicePath \"\"" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.355087 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.355103 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0083556a-b71c-48fd-be49-8bce579b479d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.533057 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c44"] Jan 26 15:20:34 crc kubenswrapper[4823]: I0126 15:20:34.541468 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7c44"] Jan 26 15:20:35 crc kubenswrapper[4823]: I0126 15:20:35.574221 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0083556a-b71c-48fd-be49-8bce579b479d" path="/var/lib/kubelet/pods/0083556a-b71c-48fd-be49-8bce579b479d/volumes" Jan 26 15:21:04 crc kubenswrapper[4823]: I0126 15:21:04.508082 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:21:04 crc kubenswrapper[4823]: I0126 15:21:04.508712 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:21:34 crc kubenswrapper[4823]: I0126 15:21:34.508974 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:21:34 crc kubenswrapper[4823]: I0126 15:21:34.510199 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.756875 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7ztj"] Jan 26 15:22:00 crc kubenswrapper[4823]: E0126 15:22:00.762881 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="extract-utilities" Jan 26 
15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.762928 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="extract-utilities" Jan 26 15:22:00 crc kubenswrapper[4823]: E0126 15:22:00.762981 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="registry-server" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.762991 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="registry-server" Jan 26 15:22:00 crc kubenswrapper[4823]: E0126 15:22:00.763030 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="extract-content" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.763041 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="extract-content" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.763560 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0083556a-b71c-48fd-be49-8bce579b479d" containerName="registry-server" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.769543 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.783015 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7ztj"] Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.790683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-catalog-content\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.790846 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-utilities\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.790911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5z8s\" (UniqueName: \"kubernetes.io/projected/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-kube-api-access-z5z8s\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.892545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-catalog-content\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.892648 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-utilities\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.892684 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5z8s\" (UniqueName: \"kubernetes.io/projected/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-kube-api-access-z5z8s\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.893398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-catalog-content\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.893616 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-utilities\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:00 crc kubenswrapper[4823]: I0126 15:22:00.916475 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5z8s\" (UniqueName: \"kubernetes.io/projected/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-kube-api-access-z5z8s\") pod \"redhat-operators-x7ztj\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:01 crc kubenswrapper[4823]: I0126 15:22:01.104424 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:01 crc kubenswrapper[4823]: I0126 15:22:01.591069 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7ztj"] Jan 26 15:22:02 crc kubenswrapper[4823]: I0126 15:22:02.147925 4823 generic.go:334] "Generic (PLEG): container finished" podID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerID="6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6" exitCode=0 Jan 26 15:22:02 crc kubenswrapper[4823]: I0126 15:22:02.148020 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerDied","Data":"6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6"} Jan 26 15:22:02 crc kubenswrapper[4823]: I0126 15:22:02.148242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerStarted","Data":"04aabe012ed41890134bdc00b9a7e7199efa69a7d4c3f11ad1957dff77ff6560"} Jan 26 15:22:04 crc kubenswrapper[4823]: I0126 15:22:04.166325 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerStarted","Data":"680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f"} Jan 26 15:22:04 crc kubenswrapper[4823]: I0126 15:22:04.507811 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:22:04 crc kubenswrapper[4823]: I0126 15:22:04.507887 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:22:04 crc kubenswrapper[4823]: I0126 15:22:04.507942 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:22:04 crc kubenswrapper[4823]: I0126 15:22:04.508823 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"961c11d3241ec3d801f95f0e8e42a013e9b9699a65807be36395bc3fcc849454"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:22:04 crc kubenswrapper[4823]: I0126 15:22:04.508908 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://961c11d3241ec3d801f95f0e8e42a013e9b9699a65807be36395bc3fcc849454" gracePeriod=600 Jan 26 15:22:05 crc kubenswrapper[4823]: I0126 15:22:05.177863 4823 generic.go:334] "Generic (PLEG): container finished" podID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerID="680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f" exitCode=0 Jan 26 15:22:05 crc kubenswrapper[4823]: I0126 15:22:05.177921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerDied","Data":"680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f"} Jan 26 15:22:08 crc kubenswrapper[4823]: I0126 15:22:08.222533 4823 generic.go:334] "Generic (PLEG): container 
finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="961c11d3241ec3d801f95f0e8e42a013e9b9699a65807be36395bc3fcc849454" exitCode=0 Jan 26 15:22:08 crc kubenswrapper[4823]: I0126 15:22:08.223490 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"961c11d3241ec3d801f95f0e8e42a013e9b9699a65807be36395bc3fcc849454"} Jan 26 15:22:08 crc kubenswrapper[4823]: I0126 15:22:08.223545 4823 scope.go:117] "RemoveContainer" containerID="9e1ac217c0f3a76dcc50b6adb0f28931e78751a000f12dca3993def0ad9fc123" Jan 26 15:22:09 crc kubenswrapper[4823]: I0126 15:22:09.236789 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerStarted","Data":"ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204"} Jan 26 15:22:09 crc kubenswrapper[4823]: I0126 15:22:09.239848 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172"} Jan 26 15:22:09 crc kubenswrapper[4823]: I0126 15:22:09.259186 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7ztj" podStartSLOduration=3.381013323 podStartE2EDuration="9.259167347s" podCreationTimestamp="2026-01-26 15:22:00 +0000 UTC" firstStartedPulling="2026-01-26 15:22:02.149781761 +0000 UTC m=+2118.835244866" lastFinishedPulling="2026-01-26 15:22:08.027935785 +0000 UTC m=+2124.713398890" observedRunningTime="2026-01-26 15:22:09.253824952 +0000 UTC m=+2125.939288057" watchObservedRunningTime="2026-01-26 15:22:09.259167347 +0000 UTC m=+2125.944630452" Jan 26 15:22:11 crc kubenswrapper[4823]: I0126 
15:22:11.105135 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:11 crc kubenswrapper[4823]: I0126 15:22:11.106093 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:12 crc kubenswrapper[4823]: I0126 15:22:12.156210 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7ztj" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="registry-server" probeResult="failure" output=< Jan 26 15:22:12 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 15:22:12 crc kubenswrapper[4823]: > Jan 26 15:22:21 crc kubenswrapper[4823]: I0126 15:22:21.145833 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:21 crc kubenswrapper[4823]: I0126 15:22:21.189045 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:21 crc kubenswrapper[4823]: I0126 15:22:21.381523 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7ztj"] Jan 26 15:22:22 crc kubenswrapper[4823]: I0126 15:22:22.453317 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7ztj" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="registry-server" containerID="cri-o://ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204" gracePeriod=2 Jan 26 15:22:22 crc kubenswrapper[4823]: I0126 15:22:22.895745 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.033601 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5z8s\" (UniqueName: \"kubernetes.io/projected/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-kube-api-access-z5z8s\") pod \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.033722 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-catalog-content\") pod \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.033765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-utilities\") pod \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\" (UID: \"34bbd717-5e33-46cd-a547-1dd4fca3cdcc\") " Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.034955 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-utilities" (OuterVolumeSpecName: "utilities") pod "34bbd717-5e33-46cd-a547-1dd4fca3cdcc" (UID: "34bbd717-5e33-46cd-a547-1dd4fca3cdcc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.051872 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-kube-api-access-z5z8s" (OuterVolumeSpecName: "kube-api-access-z5z8s") pod "34bbd717-5e33-46cd-a547-1dd4fca3cdcc" (UID: "34bbd717-5e33-46cd-a547-1dd4fca3cdcc"). InnerVolumeSpecName "kube-api-access-z5z8s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.135849 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5z8s\" (UniqueName: \"kubernetes.io/projected/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-kube-api-access-z5z8s\") on node \"crc\" DevicePath \"\"" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.135952 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.203076 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34bbd717-5e33-46cd-a547-1dd4fca3cdcc" (UID: "34bbd717-5e33-46cd-a547-1dd4fca3cdcc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.238188 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34bbd717-5e33-46cd-a547-1dd4fca3cdcc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.464351 4823 generic.go:334] "Generic (PLEG): container finished" podID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerID="ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204" exitCode=0 Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.464439 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerDied","Data":"ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204"} Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.464475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-x7ztj" event={"ID":"34bbd717-5e33-46cd-a547-1dd4fca3cdcc","Type":"ContainerDied","Data":"04aabe012ed41890134bdc00b9a7e7199efa69a7d4c3f11ad1957dff77ff6560"} Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.464472 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7ztj" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.464550 4823 scope.go:117] "RemoveContainer" containerID="ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.483528 4823 scope.go:117] "RemoveContainer" containerID="680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.504763 4823 scope.go:117] "RemoveContainer" containerID="6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.509465 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7ztj"] Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.520158 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7ztj"] Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.553783 4823 scope.go:117] "RemoveContainer" containerID="ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204" Jan 26 15:22:23 crc kubenswrapper[4823]: E0126 15:22:23.558307 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204\": container with ID starting with ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204 not found: ID does not exist" containerID="ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.558379 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204"} err="failed to get container status \"ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204\": rpc error: code = NotFound desc = could not find container \"ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204\": container with ID starting with ed745bea32c9898a9b5ef08e7d517baebdc07ff29219ed668a6f3da80327a204 not found: ID does not exist" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.558412 4823 scope.go:117] "RemoveContainer" containerID="680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f" Jan 26 15:22:23 crc kubenswrapper[4823]: E0126 15:22:23.558879 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f\": container with ID starting with 680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f not found: ID does not exist" containerID="680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.558928 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f"} err="failed to get container status \"680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f\": rpc error: code = NotFound desc = could not find container \"680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f\": container with ID starting with 680ee3692384b6c17eaf02c8e9834c40f26aaa2b75a238d54944f0643daa0d7f not found: ID does not exist" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.558959 4823 scope.go:117] "RemoveContainer" containerID="6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6" Jan 26 15:22:23 crc kubenswrapper[4823]: E0126 
15:22:23.559390 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6\": container with ID starting with 6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6 not found: ID does not exist" containerID="6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.559427 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6"} err="failed to get container status \"6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6\": rpc error: code = NotFound desc = could not find container \"6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6\": container with ID starting with 6ebb055c3a2c8986f6d7a55fa67f4ae5d6dcefc5a5b4df38e37f9cefe37d79e6 not found: ID does not exist" Jan 26 15:22:23 crc kubenswrapper[4823]: I0126 15:22:23.573028 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" path="/var/lib/kubelet/pods/34bbd717-5e33-46cd-a547-1dd4fca3cdcc/volumes" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.885931 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tz55z"] Jan 26 15:23:17 crc kubenswrapper[4823]: E0126 15:23:17.887312 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="extract-content" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.887331 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="extract-content" Jan 26 15:23:17 crc kubenswrapper[4823]: E0126 15:23:17.887388 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="registry-server" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.887395 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="registry-server" Jan 26 15:23:17 crc kubenswrapper[4823]: E0126 15:23:17.887418 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="extract-utilities" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.887424 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="extract-utilities" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.887640 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="34bbd717-5e33-46cd-a547-1dd4fca3cdcc" containerName="registry-server" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.889127 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.902741 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tz55z"] Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.987994 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-utilities\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.988063 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-catalog-content\") pod \"community-operators-tz55z\" (UID: 
\"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:17 crc kubenswrapper[4823]: I0126 15:23:17.988352 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw47b\" (UniqueName: \"kubernetes.io/projected/14833acf-3750-484f-b1ea-2854ae92a71d-kube-api-access-tw47b\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.091871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-utilities\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.091941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-catalog-content\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.092066 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw47b\" (UniqueName: \"kubernetes.io/projected/14833acf-3750-484f-b1ea-2854ae92a71d-kube-api-access-tw47b\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.092647 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-catalog-content\") pod 
\"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.092894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-utilities\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.121181 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw47b\" (UniqueName: \"kubernetes.io/projected/14833acf-3750-484f-b1ea-2854ae92a71d-kube-api-access-tw47b\") pod \"community-operators-tz55z\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.216321 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.831841 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tz55z"] Jan 26 15:23:18 crc kubenswrapper[4823]: I0126 15:23:18.951660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz55z" event={"ID":"14833acf-3750-484f-b1ea-2854ae92a71d","Type":"ContainerStarted","Data":"d85f6d2427049163cf15e13e1851a0c74c81086cd8406444c0acbb74a0d7ec2f"} Jan 26 15:23:19 crc kubenswrapper[4823]: I0126 15:23:19.967828 4823 generic.go:334] "Generic (PLEG): container finished" podID="14833acf-3750-484f-b1ea-2854ae92a71d" containerID="84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d" exitCode=0 Jan 26 15:23:19 crc kubenswrapper[4823]: I0126 15:23:19.967914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz55z" event={"ID":"14833acf-3750-484f-b1ea-2854ae92a71d","Type":"ContainerDied","Data":"84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d"} Jan 26 15:23:21 crc kubenswrapper[4823]: I0126 15:23:21.991508 4823 generic.go:334] "Generic (PLEG): container finished" podID="14833acf-3750-484f-b1ea-2854ae92a71d" containerID="f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7" exitCode=0 Jan 26 15:23:21 crc kubenswrapper[4823]: I0126 15:23:21.991739 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz55z" event={"ID":"14833acf-3750-484f-b1ea-2854ae92a71d","Type":"ContainerDied","Data":"f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7"} Jan 26 15:23:23 crc kubenswrapper[4823]: I0126 15:23:23.002149 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz55z" 
event={"ID":"14833acf-3750-484f-b1ea-2854ae92a71d","Type":"ContainerStarted","Data":"ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505"} Jan 26 15:23:28 crc kubenswrapper[4823]: I0126 15:23:28.217479 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:28 crc kubenswrapper[4823]: I0126 15:23:28.217955 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:28 crc kubenswrapper[4823]: I0126 15:23:28.270663 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:28 crc kubenswrapper[4823]: I0126 15:23:28.298123 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tz55z" podStartSLOduration=8.865647781 podStartE2EDuration="11.298099189s" podCreationTimestamp="2026-01-26 15:23:17 +0000 UTC" firstStartedPulling="2026-01-26 15:23:19.970390872 +0000 UTC m=+2196.655853987" lastFinishedPulling="2026-01-26 15:23:22.40284228 +0000 UTC m=+2199.088305395" observedRunningTime="2026-01-26 15:23:23.028692888 +0000 UTC m=+2199.714155993" watchObservedRunningTime="2026-01-26 15:23:28.298099189 +0000 UTC m=+2204.983562294" Jan 26 15:23:29 crc kubenswrapper[4823]: I0126 15:23:29.143223 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:29 crc kubenswrapper[4823]: I0126 15:23:29.201405 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tz55z"] Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.095078 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tz55z" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="registry-server" 
containerID="cri-o://ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505" gracePeriod=2 Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.609105 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.652125 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-utilities\") pod \"14833acf-3750-484f-b1ea-2854ae92a71d\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.652259 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-catalog-content\") pod \"14833acf-3750-484f-b1ea-2854ae92a71d\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.652302 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw47b\" (UniqueName: \"kubernetes.io/projected/14833acf-3750-484f-b1ea-2854ae92a71d-kube-api-access-tw47b\") pod \"14833acf-3750-484f-b1ea-2854ae92a71d\" (UID: \"14833acf-3750-484f-b1ea-2854ae92a71d\") " Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.654323 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-utilities" (OuterVolumeSpecName: "utilities") pod "14833acf-3750-484f-b1ea-2854ae92a71d" (UID: "14833acf-3750-484f-b1ea-2854ae92a71d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.662661 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14833acf-3750-484f-b1ea-2854ae92a71d-kube-api-access-tw47b" (OuterVolumeSpecName: "kube-api-access-tw47b") pod "14833acf-3750-484f-b1ea-2854ae92a71d" (UID: "14833acf-3750-484f-b1ea-2854ae92a71d"). InnerVolumeSpecName "kube-api-access-tw47b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.754759 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw47b\" (UniqueName: \"kubernetes.io/projected/14833acf-3750-484f-b1ea-2854ae92a71d-kube-api-access-tw47b\") on node \"crc\" DevicePath \"\"" Jan 26 15:23:31 crc kubenswrapper[4823]: I0126 15:23:31.754820 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.105761 4823 generic.go:334] "Generic (PLEG): container finished" podID="14833acf-3750-484f-b1ea-2854ae92a71d" containerID="ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505" exitCode=0 Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.105806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz55z" event={"ID":"14833acf-3750-484f-b1ea-2854ae92a71d","Type":"ContainerDied","Data":"ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505"} Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.105858 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tz55z" event={"ID":"14833acf-3750-484f-b1ea-2854ae92a71d","Type":"ContainerDied","Data":"d85f6d2427049163cf15e13e1851a0c74c81086cd8406444c0acbb74a0d7ec2f"} Jan 26 15:23:32 crc kubenswrapper[4823]: 
I0126 15:23:32.105857 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tz55z" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.105877 4823 scope.go:117] "RemoveContainer" containerID="ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.129286 4823 scope.go:117] "RemoveContainer" containerID="f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.133194 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14833acf-3750-484f-b1ea-2854ae92a71d" (UID: "14833acf-3750-484f-b1ea-2854ae92a71d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.151164 4823 scope.go:117] "RemoveContainer" containerID="84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.162020 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14833acf-3750-484f-b1ea-2854ae92a71d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.201050 4823 scope.go:117] "RemoveContainer" containerID="ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505" Jan 26 15:23:32 crc kubenswrapper[4823]: E0126 15:23:32.201881 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505\": container with ID starting with ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505 not found: ID does not exist" 
containerID="ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.201972 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505"} err="failed to get container status \"ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505\": rpc error: code = NotFound desc = could not find container \"ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505\": container with ID starting with ae55f982dae5a9c9ec780fb4ae072ebb1c8474b029b06e0fa14a5c6bfb64e505 not found: ID does not exist" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.202009 4823 scope.go:117] "RemoveContainer" containerID="f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7" Jan 26 15:23:32 crc kubenswrapper[4823]: E0126 15:23:32.202505 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7\": container with ID starting with f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7 not found: ID does not exist" containerID="f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.202596 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7"} err="failed to get container status \"f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7\": rpc error: code = NotFound desc = could not find container \"f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7\": container with ID starting with f6ed5a1761f6232d5652f6e3f1d1b0e31ed600f6d9472187d8be559caaec73a7 not found: ID does not exist" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.202653 4823 scope.go:117] 
"RemoveContainer" containerID="84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d" Jan 26 15:23:32 crc kubenswrapper[4823]: E0126 15:23:32.204002 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d\": container with ID starting with 84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d not found: ID does not exist" containerID="84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.204074 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d"} err="failed to get container status \"84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d\": rpc error: code = NotFound desc = could not find container \"84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d\": container with ID starting with 84e46b8f78b36536db5387bdee20bf748048f011fc42dbaaf18325a2bbd9e01d not found: ID does not exist" Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.443581 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tz55z"] Jan 26 15:23:32 crc kubenswrapper[4823]: I0126 15:23:32.450162 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tz55z"] Jan 26 15:23:33 crc kubenswrapper[4823]: I0126 15:23:33.574712 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" path="/var/lib/kubelet/pods/14833acf-3750-484f-b1ea-2854ae92a71d/volumes" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.932968 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x6929"] Jan 26 15:23:36 crc kubenswrapper[4823]: E0126 15:23:36.933742 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="extract-utilities" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.933756 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="extract-utilities" Jan 26 15:23:36 crc kubenswrapper[4823]: E0126 15:23:36.933789 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="extract-content" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.933795 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="extract-content" Jan 26 15:23:36 crc kubenswrapper[4823]: E0126 15:23:36.933807 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="registry-server" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.933813 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="registry-server" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.934218 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="14833acf-3750-484f-b1ea-2854ae92a71d" containerName="registry-server" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.935676 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.947034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6929"] Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.952334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-utilities\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.952411 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-catalog-content\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:36 crc kubenswrapper[4823]: I0126 15:23:36.952445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkf4z\" (UniqueName: \"kubernetes.io/projected/7489829f-2f35-44a5-8a21-452496a51db9-kube-api-access-wkf4z\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.053237 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-utilities\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.053304 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-catalog-content\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.053338 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkf4z\" (UniqueName: \"kubernetes.io/projected/7489829f-2f35-44a5-8a21-452496a51db9-kube-api-access-wkf4z\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.054070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-catalog-content\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.054065 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-utilities\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.076935 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkf4z\" (UniqueName: \"kubernetes.io/projected/7489829f-2f35-44a5-8a21-452496a51db9-kube-api-access-wkf4z\") pod \"certified-operators-x6929\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.254819 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:37 crc kubenswrapper[4823]: I0126 15:23:37.646074 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x6929"] Jan 26 15:23:38 crc kubenswrapper[4823]: I0126 15:23:38.189943 4823 generic.go:334] "Generic (PLEG): container finished" podID="7489829f-2f35-44a5-8a21-452496a51db9" containerID="35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad" exitCode=0 Jan 26 15:23:38 crc kubenswrapper[4823]: I0126 15:23:38.190048 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerDied","Data":"35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad"} Jan 26 15:23:38 crc kubenswrapper[4823]: I0126 15:23:38.191666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerStarted","Data":"60417d6f045737efa5ac03c16c96f44b2b5e3ad36ab2393089211acfb3fc9039"} Jan 26 15:23:39 crc kubenswrapper[4823]: I0126 15:23:39.206036 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerStarted","Data":"da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571"} Jan 26 15:23:40 crc kubenswrapper[4823]: I0126 15:23:40.219802 4823 generic.go:334] "Generic (PLEG): container finished" podID="7489829f-2f35-44a5-8a21-452496a51db9" containerID="da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571" exitCode=0 Jan 26 15:23:40 crc kubenswrapper[4823]: I0126 15:23:40.220851 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" 
event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerDied","Data":"da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571"} Jan 26 15:23:41 crc kubenswrapper[4823]: I0126 15:23:41.231886 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerStarted","Data":"f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172"} Jan 26 15:23:41 crc kubenswrapper[4823]: I0126 15:23:41.254825 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x6929" podStartSLOduration=2.592911902 podStartE2EDuration="5.25479316s" podCreationTimestamp="2026-01-26 15:23:36 +0000 UTC" firstStartedPulling="2026-01-26 15:23:38.192857046 +0000 UTC m=+2214.878320191" lastFinishedPulling="2026-01-26 15:23:40.854738314 +0000 UTC m=+2217.540201449" observedRunningTime="2026-01-26 15:23:41.25406342 +0000 UTC m=+2217.939526525" watchObservedRunningTime="2026-01-26 15:23:41.25479316 +0000 UTC m=+2217.940256275" Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.306817 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.315651 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.324078 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7fjhf"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.332083 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.341802 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.349418 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.357012 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.366522 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-4lssx"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.374901 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-qjqr4"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.382501 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bjs8x"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.389649 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.396464 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-pz9sf"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.402694 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-wzm75"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.408583 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-6856k"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.414411 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg"] Jan 26 15:23:44 crc 
kubenswrapper[4823]: I0126 15:23:44.421105 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4t8g9"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.427164 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.434283 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-x4msg"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.441196 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-4t8g9"] Jan 26 15:23:44 crc kubenswrapper[4823]: I0126 15:23:44.447131 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-dgrg6"] Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.573972 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b81a5da-2c44-44de-a3b3-a6ea31c16692" path="/var/lib/kubelet/pods/2b81a5da-2c44-44de-a3b3-a6ea31c16692/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.576795 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360" path="/var/lib/kubelet/pods/49f37d48-c6d4-4d0b-b1c8-4a6f5aaa2360/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.577488 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="632b9de7-75fd-44b2-94fb-faf1a6f005ef" path="/var/lib/kubelet/pods/632b9de7-75fd-44b2-94fb-faf1a6f005ef/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.578152 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="635f75a1-b5f4-46ac-88ac-dfdd03b5cec5" path="/var/lib/kubelet/pods/635f75a1-b5f4-46ac-88ac-dfdd03b5cec5/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.578864 4823 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="69c2cec8-efd8-4432-8c31-bd77a00d4792" path="/var/lib/kubelet/pods/69c2cec8-efd8-4432-8c31-bd77a00d4792/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.580145 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86108dca-c7b6-4737-83b2-6b665cfdd9b4" path="/var/lib/kubelet/pods/86108dca-c7b6-4737-83b2-6b665cfdd9b4/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.580707 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9284efb6-1f6a-4eaf-9ec1-f8263d674ceb" path="/var/lib/kubelet/pods/9284efb6-1f6a-4eaf-9ec1-f8263d674ceb/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.581232 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4a3642c-422b-460f-9554-6bcaeb591ea2" path="/var/lib/kubelet/pods/c4a3642c-422b-460f-9554-6bcaeb591ea2/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.581742 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59678d3-cd2b-493b-9cac-3e7543982453" path="/var/lib/kubelet/pods/c59678d3-cd2b-493b-9cac-3e7543982453/volumes" Jan 26 15:23:45 crc kubenswrapper[4823]: I0126 15:23:45.582798 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c0ecde-3daa-4c62-be28-4cb76ac205e0" path="/var/lib/kubelet/pods/c5c0ecde-3daa-4c62-be28-4cb76ac205e0/volumes" Jan 26 15:23:47 crc kubenswrapper[4823]: I0126 15:23:47.256740 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:47 crc kubenswrapper[4823]: I0126 15:23:47.256835 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:47 crc kubenswrapper[4823]: I0126 15:23:47.332030 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:47 crc 
kubenswrapper[4823]: I0126 15:23:47.391153 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:47 crc kubenswrapper[4823]: I0126 15:23:47.586859 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6929"] Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.321218 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x6929" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="registry-server" containerID="cri-o://f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172" gracePeriod=2 Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.755627 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.849551 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkf4z\" (UniqueName: \"kubernetes.io/projected/7489829f-2f35-44a5-8a21-452496a51db9-kube-api-access-wkf4z\") pod \"7489829f-2f35-44a5-8a21-452496a51db9\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.849707 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-utilities\") pod \"7489829f-2f35-44a5-8a21-452496a51db9\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.849909 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-catalog-content\") pod \"7489829f-2f35-44a5-8a21-452496a51db9\" (UID: \"7489829f-2f35-44a5-8a21-452496a51db9\") " Jan 26 15:23:49 
crc kubenswrapper[4823]: I0126 15:23:49.850591 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-utilities" (OuterVolumeSpecName: "utilities") pod "7489829f-2f35-44a5-8a21-452496a51db9" (UID: "7489829f-2f35-44a5-8a21-452496a51db9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.856752 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7489829f-2f35-44a5-8a21-452496a51db9-kube-api-access-wkf4z" (OuterVolumeSpecName: "kube-api-access-wkf4z") pod "7489829f-2f35-44a5-8a21-452496a51db9" (UID: "7489829f-2f35-44a5-8a21-452496a51db9"). InnerVolumeSpecName "kube-api-access-wkf4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.909069 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7489829f-2f35-44a5-8a21-452496a51db9" (UID: "7489829f-2f35-44a5-8a21-452496a51db9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.952288 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkf4z\" (UniqueName: \"kubernetes.io/projected/7489829f-2f35-44a5-8a21-452496a51db9-kube-api-access-wkf4z\") on node \"crc\" DevicePath \"\"" Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.952336 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:23:49 crc kubenswrapper[4823]: I0126 15:23:49.952350 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7489829f-2f35-44a5-8a21-452496a51db9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.194119 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh"] Jan 26 15:23:50 crc kubenswrapper[4823]: E0126 15:23:50.194905 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="registry-server" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.194936 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="registry-server" Jan 26 15:23:50 crc kubenswrapper[4823]: E0126 15:23:50.194995 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="extract-utilities" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.195010 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="extract-utilities" Jan 26 15:23:50 crc kubenswrapper[4823]: E0126 15:23:50.195033 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="extract-content" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.195044 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="extract-content" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.195351 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7489829f-2f35-44a5-8a21-452496a51db9" containerName="registry-server" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.196348 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.201905 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.202072 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.202180 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.202336 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.203924 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.205352 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh"] Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.259919 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.260107 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr8s8\" (UniqueName: \"kubernetes.io/projected/8a90f744-fb78-46b3-9b5b-c83e711fafc5-kube-api-access-sr8s8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.260183 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.260323 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.260355 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: 
\"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.338715 4823 generic.go:334] "Generic (PLEG): container finished" podID="7489829f-2f35-44a5-8a21-452496a51db9" containerID="f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172" exitCode=0 Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.338765 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerDied","Data":"f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172"} Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.338795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x6929" event={"ID":"7489829f-2f35-44a5-8a21-452496a51db9","Type":"ContainerDied","Data":"60417d6f045737efa5ac03c16c96f44b2b5e3ad36ab2393089211acfb3fc9039"} Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.338814 4823 scope.go:117] "RemoveContainer" containerID="f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.338817 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x6929" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.365846 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.366445 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr8s8\" (UniqueName: \"kubernetes.io/projected/8a90f744-fb78-46b3-9b5b-c83e711fafc5-kube-api-access-sr8s8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.366500 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.366750 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.366784 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.372601 4823 scope.go:117] "RemoveContainer" containerID="da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.374717 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x6929"] Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.375309 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.381270 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.382001 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x6929"] Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.382095 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ceph\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.383324 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.396737 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr8s8\" (UniqueName: \"kubernetes.io/projected/8a90f744-fb78-46b3-9b5b-c83e711fafc5-kube-api-access-sr8s8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-krggh\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.444009 4823 scope.go:117] "RemoveContainer" containerID="35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.491825 4823 scope.go:117] "RemoveContainer" containerID="f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172" Jan 26 15:23:50 crc kubenswrapper[4823]: E0126 15:23:50.492672 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172\": container with ID starting with f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172 not found: ID does not exist" containerID="f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.492711 4823 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172"} err="failed to get container status \"f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172\": rpc error: code = NotFound desc = could not find container \"f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172\": container with ID starting with f708ba5364b2ccac2cfb72e6268c8f656c19cc36938ca384e7e35d7da9a65172 not found: ID does not exist" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.492735 4823 scope.go:117] "RemoveContainer" containerID="da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571" Jan 26 15:23:50 crc kubenswrapper[4823]: E0126 15:23:50.493049 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571\": container with ID starting with da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571 not found: ID does not exist" containerID="da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.493074 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571"} err="failed to get container status \"da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571\": rpc error: code = NotFound desc = could not find container \"da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571\": container with ID starting with da9d161acec2c54525fe9341efd7353c381e3a1c0c2cd84ae9c0830947b34571 not found: ID does not exist" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.493093 4823 scope.go:117] "RemoveContainer" containerID="35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad" Jan 26 15:23:50 crc kubenswrapper[4823]: E0126 15:23:50.493931 4823 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad\": container with ID starting with 35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad not found: ID does not exist" containerID="35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.493959 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad"} err="failed to get container status \"35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad\": rpc error: code = NotFound desc = could not find container \"35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad\": container with ID starting with 35a07c243558362ab7549f3fa08d1773212c832ac02119d3fd069e4cd7cc10ad not found: ID does not exist" Jan 26 15:23:50 crc kubenswrapper[4823]: I0126 15:23:50.521528 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:23:51 crc kubenswrapper[4823]: I0126 15:23:51.120659 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh"] Jan 26 15:23:51 crc kubenswrapper[4823]: I0126 15:23:51.346832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" event={"ID":"8a90f744-fb78-46b3-9b5b-c83e711fafc5","Type":"ContainerStarted","Data":"94379e7b186a02220e367aaa8858af51882247e005dfc26d14c644ea9092cd95"} Jan 26 15:23:51 crc kubenswrapper[4823]: I0126 15:23:51.575270 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7489829f-2f35-44a5-8a21-452496a51db9" path="/var/lib/kubelet/pods/7489829f-2f35-44a5-8a21-452496a51db9/volumes" Jan 26 15:23:52 crc kubenswrapper[4823]: I0126 15:23:52.359151 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" event={"ID":"8a90f744-fb78-46b3-9b5b-c83e711fafc5","Type":"ContainerStarted","Data":"5aabfa37135b4f2cddf89a68b32e22ab370f507d26a767c8cc44358f309af1b3"} Jan 26 15:23:52 crc kubenswrapper[4823]: I0126 15:23:52.387241 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" podStartSLOduration=1.502240096 podStartE2EDuration="2.387219043s" podCreationTimestamp="2026-01-26 15:23:50 +0000 UTC" firstStartedPulling="2026-01-26 15:23:51.119458689 +0000 UTC m=+2227.804921804" lastFinishedPulling="2026-01-26 15:23:52.004437646 +0000 UTC m=+2228.689900751" observedRunningTime="2026-01-26 15:23:52.378804494 +0000 UTC m=+2229.064267599" watchObservedRunningTime="2026-01-26 15:23:52.387219043 +0000 UTC m=+2229.072682168" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.023096 4823 scope.go:117] "RemoveContainer" 
containerID="0a0d7552456b5c37c4e529b68cb55d5080d7026208629a27d2524f32b1ba5dd1" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.073569 4823 scope.go:117] "RemoveContainer" containerID="d0c5d128c64e93bfe79ca8db80100ff913f598c449ffcf056f8a571f5971fe1e" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.123106 4823 scope.go:117] "RemoveContainer" containerID="93c40345a4c1e7559c83259c13a88a8671638bdc4704a9e3b1581d975c639d73" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.180235 4823 scope.go:117] "RemoveContainer" containerID="d3929e4a14c21df8fee731e99ffe9ccf66d32cf925928eccf332628a03810fd8" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.243570 4823 scope.go:117] "RemoveContainer" containerID="8528170ea1d57250f9836b2b96ae6b103b69c52b973fd905e384d830ab500229" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.302602 4823 scope.go:117] "RemoveContainer" containerID="4538880b5a3252353b31f9baa136b6932381849f7becf322e2b6315f4ecc54c1" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.361032 4823 scope.go:117] "RemoveContainer" containerID="4d27f7224de11fa16066738cc75486e47b5db147c51097501dda6dfcffb79067" Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.605387 4823 generic.go:334] "Generic (PLEG): container finished" podID="8a90f744-fb78-46b3-9b5b-c83e711fafc5" containerID="5aabfa37135b4f2cddf89a68b32e22ab370f507d26a767c8cc44358f309af1b3" exitCode=0 Jan 26 15:24:06 crc kubenswrapper[4823]: I0126 15:24:06.605483 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" event={"ID":"8a90f744-fb78-46b3-9b5b-c83e711fafc5","Type":"ContainerDied","Data":"5aabfa37135b4f2cddf89a68b32e22ab370f507d26a767c8cc44358f309af1b3"} Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.088430 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.259231 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ceph\") pod \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.259380 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-repo-setup-combined-ca-bundle\") pod \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.259456 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr8s8\" (UniqueName: \"kubernetes.io/projected/8a90f744-fb78-46b3-9b5b-c83e711fafc5-kube-api-access-sr8s8\") pod \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.259531 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ssh-key-openstack-edpm-ipam\") pod \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.259815 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-inventory\") pod \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\" (UID: \"8a90f744-fb78-46b3-9b5b-c83e711fafc5\") " Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.267203 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ceph" (OuterVolumeSpecName: "ceph") pod "8a90f744-fb78-46b3-9b5b-c83e711fafc5" (UID: "8a90f744-fb78-46b3-9b5b-c83e711fafc5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.267656 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a90f744-fb78-46b3-9b5b-c83e711fafc5-kube-api-access-sr8s8" (OuterVolumeSpecName: "kube-api-access-sr8s8") pod "8a90f744-fb78-46b3-9b5b-c83e711fafc5" (UID: "8a90f744-fb78-46b3-9b5b-c83e711fafc5"). InnerVolumeSpecName "kube-api-access-sr8s8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.269704 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "8a90f744-fb78-46b3-9b5b-c83e711fafc5" (UID: "8a90f744-fb78-46b3-9b5b-c83e711fafc5"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.288019 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-inventory" (OuterVolumeSpecName: "inventory") pod "8a90f744-fb78-46b3-9b5b-c83e711fafc5" (UID: "8a90f744-fb78-46b3-9b5b-c83e711fafc5"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.294982 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8a90f744-fb78-46b3-9b5b-c83e711fafc5" (UID: "8a90f744-fb78-46b3-9b5b-c83e711fafc5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.363814 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.364308 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.364330 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.364348 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr8s8\" (UniqueName: \"kubernetes.io/projected/8a90f744-fb78-46b3-9b5b-c83e711fafc5-kube-api-access-sr8s8\") on node \"crc\" DevicePath \"\"" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.364399 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8a90f744-fb78-46b3-9b5b-c83e711fafc5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.630939 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" event={"ID":"8a90f744-fb78-46b3-9b5b-c83e711fafc5","Type":"ContainerDied","Data":"94379e7b186a02220e367aaa8858af51882247e005dfc26d14c644ea9092cd95"} Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.631002 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94379e7b186a02220e367aaa8858af51882247e005dfc26d14c644ea9092cd95" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.631024 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-krggh" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.767206 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg"] Jan 26 15:24:08 crc kubenswrapper[4823]: E0126 15:24:08.767848 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a90f744-fb78-46b3-9b5b-c83e711fafc5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.767881 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a90f744-fb78-46b3-9b5b-c83e711fafc5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.768137 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a90f744-fb78-46b3-9b5b-c83e711fafc5" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.769041 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.771467 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.772073 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.772334 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.772688 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.772834 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.776454 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg"] Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.882985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbscd\" (UniqueName: \"kubernetes.io/projected/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-kube-api-access-nbscd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.883236 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: 
\"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.883322 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.883358 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.883584 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.985801 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.985880 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.985966 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbscd\" (UniqueName: \"kubernetes.io/projected/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-kube-api-access-nbscd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.986023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.986085 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.991905 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: 
\"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.992168 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.992529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:08 crc kubenswrapper[4823]: I0126 15:24:08.992813 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:09 crc kubenswrapper[4823]: I0126 15:24:09.020027 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbscd\" (UniqueName: \"kubernetes.io/projected/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-kube-api-access-nbscd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:09 crc kubenswrapper[4823]: I0126 15:24:09.126761 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:24:09 crc kubenswrapper[4823]: I0126 15:24:09.715597 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg"] Jan 26 15:24:10 crc kubenswrapper[4823]: I0126 15:24:10.653713 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" event={"ID":"18dfc993-b32b-4eae-9258-b6ac5a48e3ba","Type":"ContainerStarted","Data":"fd617a44078f2dbfda2f2a01ae62d681e228ebdec5c85bfe47ed64379b06a166"} Jan 26 15:24:10 crc kubenswrapper[4823]: I0126 15:24:10.654596 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" event={"ID":"18dfc993-b32b-4eae-9258-b6ac5a48e3ba","Type":"ContainerStarted","Data":"a188706e33d88b5c002fd9f4e4abccd54259dbce1972a2cc2550545c7bd249fd"} Jan 26 15:24:10 crc kubenswrapper[4823]: I0126 15:24:10.686179 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" podStartSLOduration=2.1523755270000002 podStartE2EDuration="2.686142974s" podCreationTimestamp="2026-01-26 15:24:08 +0000 UTC" firstStartedPulling="2026-01-26 15:24:09.712942802 +0000 UTC m=+2246.398405907" lastFinishedPulling="2026-01-26 15:24:10.246710249 +0000 UTC m=+2246.932173354" observedRunningTime="2026-01-26 15:24:10.67749868 +0000 UTC m=+2247.362961795" watchObservedRunningTime="2026-01-26 15:24:10.686142974 +0000 UTC m=+2247.371606079" Jan 26 15:24:34 crc kubenswrapper[4823]: I0126 15:24:34.508121 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:24:34 crc 
kubenswrapper[4823]: I0126 15:24:34.510777 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:25:04 crc kubenswrapper[4823]: I0126 15:25:04.508565 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:25:04 crc kubenswrapper[4823]: I0126 15:25:04.509191 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:25:06 crc kubenswrapper[4823]: I0126 15:25:06.596667 4823 scope.go:117] "RemoveContainer" containerID="2473781a4b1eb5b632977dda56370a8a67ec7a7391a5b9170cae4deb3016cb66" Jan 26 15:25:06 crc kubenswrapper[4823]: I0126 15:25:06.644427 4823 scope.go:117] "RemoveContainer" containerID="f6ca9e6c6b19320d47667cea908e90f57a0c6d2eb2b907478308b067ac5018ae" Jan 26 15:25:06 crc kubenswrapper[4823]: I0126 15:25:06.697016 4823 scope.go:117] "RemoveContainer" containerID="c067516d502d4390ca5f82df69a03afa917f612d4d2d8fb0ee0a3c19c64e2df0" Jan 26 15:25:34 crc kubenswrapper[4823]: I0126 15:25:34.508834 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 15:25:34 crc kubenswrapper[4823]: I0126 15:25:34.509447 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:25:34 crc kubenswrapper[4823]: I0126 15:25:34.509510 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:25:34 crc kubenswrapper[4823]: I0126 15:25:34.510269 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:25:34 crc kubenswrapper[4823]: I0126 15:25:34.510334 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" gracePeriod=600 Jan 26 15:25:34 crc kubenswrapper[4823]: E0126 15:25:34.632527 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:25:35 crc kubenswrapper[4823]: I0126 15:25:35.442299 
4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" exitCode=0 Jan 26 15:25:35 crc kubenswrapper[4823]: I0126 15:25:35.442358 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172"} Jan 26 15:25:35 crc kubenswrapper[4823]: I0126 15:25:35.442425 4823 scope.go:117] "RemoveContainer" containerID="961c11d3241ec3d801f95f0e8e42a013e9b9699a65807be36395bc3fcc849454" Jan 26 15:25:35 crc kubenswrapper[4823]: I0126 15:25:35.443988 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:25:35 crc kubenswrapper[4823]: E0126 15:25:35.444659 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:25:48 crc kubenswrapper[4823]: I0126 15:25:48.561670 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:25:48 crc kubenswrapper[4823]: E0126 15:25:48.563450 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:25:59 crc kubenswrapper[4823]: I0126 15:25:59.623819 4823 generic.go:334] "Generic (PLEG): container finished" podID="18dfc993-b32b-4eae-9258-b6ac5a48e3ba" containerID="fd617a44078f2dbfda2f2a01ae62d681e228ebdec5c85bfe47ed64379b06a166" exitCode=0 Jan 26 15:25:59 crc kubenswrapper[4823]: I0126 15:25:59.623938 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" event={"ID":"18dfc993-b32b-4eae-9258-b6ac5a48e3ba","Type":"ContainerDied","Data":"fd617a44078f2dbfda2f2a01ae62d681e228ebdec5c85bfe47ed64379b06a166"} Jan 26 15:26:00 crc kubenswrapper[4823]: I0126 15:26:00.560736 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:26:00 crc kubenswrapper[4823]: E0126 15:26:00.561001 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.152017 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.244029 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbscd\" (UniqueName: \"kubernetes.io/projected/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-kube-api-access-nbscd\") pod \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.244153 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ssh-key-openstack-edpm-ipam\") pod \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.244207 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-inventory\") pod \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.244332 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-bootstrap-combined-ca-bundle\") pod \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.244443 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ceph\") pod \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\" (UID: \"18dfc993-b32b-4eae-9258-b6ac5a48e3ba\") " Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.253212 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-kube-api-access-nbscd" (OuterVolumeSpecName: "kube-api-access-nbscd") pod "18dfc993-b32b-4eae-9258-b6ac5a48e3ba" (UID: "18dfc993-b32b-4eae-9258-b6ac5a48e3ba"). InnerVolumeSpecName "kube-api-access-nbscd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.254030 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ceph" (OuterVolumeSpecName: "ceph") pod "18dfc993-b32b-4eae-9258-b6ac5a48e3ba" (UID: "18dfc993-b32b-4eae-9258-b6ac5a48e3ba"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.265705 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "18dfc993-b32b-4eae-9258-b6ac5a48e3ba" (UID: "18dfc993-b32b-4eae-9258-b6ac5a48e3ba"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.278237 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "18dfc993-b32b-4eae-9258-b6ac5a48e3ba" (UID: "18dfc993-b32b-4eae-9258-b6ac5a48e3ba"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.291900 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-inventory" (OuterVolumeSpecName: "inventory") pod "18dfc993-b32b-4eae-9258-b6ac5a48e3ba" (UID: "18dfc993-b32b-4eae-9258-b6ac5a48e3ba"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.350557 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.350598 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.350609 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.350617 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.350628 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nbscd\" (UniqueName: \"kubernetes.io/projected/18dfc993-b32b-4eae-9258-b6ac5a48e3ba-kube-api-access-nbscd\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.641796 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" event={"ID":"18dfc993-b32b-4eae-9258-b6ac5a48e3ba","Type":"ContainerDied","Data":"a188706e33d88b5c002fd9f4e4abccd54259dbce1972a2cc2550545c7bd249fd"} Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.641837 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a188706e33d88b5c002fd9f4e4abccd54259dbce1972a2cc2550545c7bd249fd" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.641891 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.738983 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5"] Jan 26 15:26:01 crc kubenswrapper[4823]: E0126 15:26:01.739339 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18dfc993-b32b-4eae-9258-b6ac5a48e3ba" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.739357 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="18dfc993-b32b-4eae-9258-b6ac5a48e3ba" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.739580 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="18dfc993-b32b-4eae-9258-b6ac5a48e3ba" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.740391 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.742444 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.742468 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.742564 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.742668 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.743609 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.753223 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5"] Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.861178 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.861554 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" 
(UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.861888 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.861936 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8bzg\" (UniqueName: \"kubernetes.io/projected/cf0b58f0-fc03-49a7-8795-112628f1e6e1-kube-api-access-n8bzg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.963426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.963538 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc 
kubenswrapper[4823]: I0126 15:26:01.963615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.963638 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8bzg\" (UniqueName: \"kubernetes.io/projected/cf0b58f0-fc03-49a7-8795-112628f1e6e1-kube-api-access-n8bzg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.967974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.968010 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.973181 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ssh-key-openstack-edpm-ipam\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:01 crc kubenswrapper[4823]: I0126 15:26:01.978886 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8bzg\" (UniqueName: \"kubernetes.io/projected/cf0b58f0-fc03-49a7-8795-112628f1e6e1-kube-api-access-n8bzg\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:02 crc kubenswrapper[4823]: I0126 15:26:02.060018 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:02 crc kubenswrapper[4823]: I0126 15:26:02.547385 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5"] Jan 26 15:26:02 crc kubenswrapper[4823]: I0126 15:26:02.557250 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:26:02 crc kubenswrapper[4823]: I0126 15:26:02.650127 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" event={"ID":"cf0b58f0-fc03-49a7-8795-112628f1e6e1","Type":"ContainerStarted","Data":"9a505f0437526dcddb50f661e59592343b53b024410f76a0ce74d8fe20eedb96"} Jan 26 15:26:03 crc kubenswrapper[4823]: I0126 15:26:03.660268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" event={"ID":"cf0b58f0-fc03-49a7-8795-112628f1e6e1","Type":"ContainerStarted","Data":"edc1bf40034d667de768651ddf0870fde2d1edf2a8aa68dbf08b4ca25b3460a2"} Jan 26 15:26:14 crc kubenswrapper[4823]: I0126 15:26:14.561011 4823 scope.go:117] 
"RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:26:14 crc kubenswrapper[4823]: E0126 15:26:14.562457 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:26:26 crc kubenswrapper[4823]: I0126 15:26:26.560055 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:26:26 crc kubenswrapper[4823]: E0126 15:26:26.560825 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:26:28 crc kubenswrapper[4823]: I0126 15:26:28.866001 4823 generic.go:334] "Generic (PLEG): container finished" podID="cf0b58f0-fc03-49a7-8795-112628f1e6e1" containerID="edc1bf40034d667de768651ddf0870fde2d1edf2a8aa68dbf08b4ca25b3460a2" exitCode=0 Jan 26 15:26:28 crc kubenswrapper[4823]: I0126 15:26:28.866116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" event={"ID":"cf0b58f0-fc03-49a7-8795-112628f1e6e1","Type":"ContainerDied","Data":"edc1bf40034d667de768651ddf0870fde2d1edf2a8aa68dbf08b4ca25b3460a2"} Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.234059 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.334835 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-inventory\") pod \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.334881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ceph\") pod \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.335018 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8bzg\" (UniqueName: \"kubernetes.io/projected/cf0b58f0-fc03-49a7-8795-112628f1e6e1-kube-api-access-n8bzg\") pod \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.335127 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ssh-key-openstack-edpm-ipam\") pod \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\" (UID: \"cf0b58f0-fc03-49a7-8795-112628f1e6e1\") " Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.341458 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf0b58f0-fc03-49a7-8795-112628f1e6e1-kube-api-access-n8bzg" (OuterVolumeSpecName: "kube-api-access-n8bzg") pod "cf0b58f0-fc03-49a7-8795-112628f1e6e1" (UID: "cf0b58f0-fc03-49a7-8795-112628f1e6e1"). InnerVolumeSpecName "kube-api-access-n8bzg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.342131 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ceph" (OuterVolumeSpecName: "ceph") pod "cf0b58f0-fc03-49a7-8795-112628f1e6e1" (UID: "cf0b58f0-fc03-49a7-8795-112628f1e6e1"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.365006 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "cf0b58f0-fc03-49a7-8795-112628f1e6e1" (UID: "cf0b58f0-fc03-49a7-8795-112628f1e6e1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.369421 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-inventory" (OuterVolumeSpecName: "inventory") pod "cf0b58f0-fc03-49a7-8795-112628f1e6e1" (UID: "cf0b58f0-fc03-49a7-8795-112628f1e6e1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.436692 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8bzg\" (UniqueName: \"kubernetes.io/projected/cf0b58f0-fc03-49a7-8795-112628f1e6e1-kube-api-access-n8bzg\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.436731 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.436743 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.436751 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cf0b58f0-fc03-49a7-8795-112628f1e6e1-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.882652 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" event={"ID":"cf0b58f0-fc03-49a7-8795-112628f1e6e1","Type":"ContainerDied","Data":"9a505f0437526dcddb50f661e59592343b53b024410f76a0ce74d8fe20eedb96"} Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.882704 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a505f0437526dcddb50f661e59592343b53b024410f76a0ce74d8fe20eedb96" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.882764 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.961315 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb"] Jan 26 15:26:30 crc kubenswrapper[4823]: E0126 15:26:30.961721 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf0b58f0-fc03-49a7-8795-112628f1e6e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.961738 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf0b58f0-fc03-49a7-8795-112628f1e6e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.961902 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf0b58f0-fc03-49a7-8795-112628f1e6e1" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.962497 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.964994 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.965289 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.965569 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.971894 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.972077 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:26:30 crc kubenswrapper[4823]: I0126 15:26:30.976381 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb"] Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.046927 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq24n\" (UniqueName: \"kubernetes.io/projected/9a238cba-38fe-45bd-b0f4-aca93eb1484b-kube-api-access-jq24n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.047015 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ssh-key-openstack-edpm-ipam\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.047062 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.047104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.149925 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.150109 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 
15:26:31.150730 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq24n\" (UniqueName: \"kubernetes.io/projected/9a238cba-38fe-45bd-b0f4-aca93eb1484b-kube-api-access-jq24n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.150926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.154663 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.155791 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.156768 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.168103 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq24n\" (UniqueName: \"kubernetes.io/projected/9a238cba-38fe-45bd-b0f4-aca93eb1484b-kube-api-access-jq24n\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.288236 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.782352 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb"] Jan 26 15:26:31 crc kubenswrapper[4823]: I0126 15:26:31.910492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" event={"ID":"9a238cba-38fe-45bd-b0f4-aca93eb1484b","Type":"ContainerStarted","Data":"81ae4970cc6d4043ad5b04ecc0a0e10db0690acbb8ca9d6441c92bce2c23b750"} Jan 26 15:26:32 crc kubenswrapper[4823]: I0126 15:26:32.918883 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" event={"ID":"9a238cba-38fe-45bd-b0f4-aca93eb1484b","Type":"ContainerStarted","Data":"f782b97ee234e6198f9b9f3e620024fa5a8bf229b78debb27ea91c850e1621a6"} Jan 26 15:26:32 crc kubenswrapper[4823]: I0126 15:26:32.941133 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" podStartSLOduration=2.490519822 podStartE2EDuration="2.941116171s" podCreationTimestamp="2026-01-26 15:26:30 +0000 UTC" firstStartedPulling="2026-01-26 15:26:31.784552078 +0000 UTC m=+2388.470015183" lastFinishedPulling="2026-01-26 15:26:32.235148427 +0000 UTC m=+2388.920611532" observedRunningTime="2026-01-26 15:26:32.935147089 +0000 UTC m=+2389.620610224" watchObservedRunningTime="2026-01-26 15:26:32.941116171 +0000 UTC m=+2389.626579276" Jan 26 15:26:37 crc kubenswrapper[4823]: I0126 15:26:37.961390 4823 generic.go:334] "Generic (PLEG): container finished" podID="9a238cba-38fe-45bd-b0f4-aca93eb1484b" containerID="f782b97ee234e6198f9b9f3e620024fa5a8bf229b78debb27ea91c850e1621a6" exitCode=0 Jan 26 15:26:37 crc kubenswrapper[4823]: I0126 15:26:37.961518 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" event={"ID":"9a238cba-38fe-45bd-b0f4-aca93eb1484b","Type":"ContainerDied","Data":"f782b97ee234e6198f9b9f3e620024fa5a8bf229b78debb27ea91c850e1621a6"} Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.345512 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.426146 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ceph\") pod \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.426199 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jq24n\" (UniqueName: \"kubernetes.io/projected/9a238cba-38fe-45bd-b0f4-aca93eb1484b-kube-api-access-jq24n\") pod \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.426425 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ssh-key-openstack-edpm-ipam\") pod \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.426533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-inventory\") pod \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\" (UID: \"9a238cba-38fe-45bd-b0f4-aca93eb1484b\") " Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.433027 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a238cba-38fe-45bd-b0f4-aca93eb1484b-kube-api-access-jq24n" (OuterVolumeSpecName: "kube-api-access-jq24n") pod "9a238cba-38fe-45bd-b0f4-aca93eb1484b" (UID: "9a238cba-38fe-45bd-b0f4-aca93eb1484b"). InnerVolumeSpecName "kube-api-access-jq24n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.433482 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ceph" (OuterVolumeSpecName: "ceph") pod "9a238cba-38fe-45bd-b0f4-aca93eb1484b" (UID: "9a238cba-38fe-45bd-b0f4-aca93eb1484b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.461307 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9a238cba-38fe-45bd-b0f4-aca93eb1484b" (UID: "9a238cba-38fe-45bd-b0f4-aca93eb1484b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.464123 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-inventory" (OuterVolumeSpecName: "inventory") pod "9a238cba-38fe-45bd-b0f4-aca93eb1484b" (UID: "9a238cba-38fe-45bd-b0f4-aca93eb1484b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.528891 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.528921 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jq24n\" (UniqueName: \"kubernetes.io/projected/9a238cba-38fe-45bd-b0f4-aca93eb1484b-kube-api-access-jq24n\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.528932 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.528942 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a238cba-38fe-45bd-b0f4-aca93eb1484b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.560481 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:26:39 crc kubenswrapper[4823]: E0126 15:26:39.560996 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.979053 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" 
event={"ID":"9a238cba-38fe-45bd-b0f4-aca93eb1484b","Type":"ContainerDied","Data":"81ae4970cc6d4043ad5b04ecc0a0e10db0690acbb8ca9d6441c92bce2c23b750"} Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.979102 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb" Jan 26 15:26:39 crc kubenswrapper[4823]: I0126 15:26:39.979103 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ae4970cc6d4043ad5b04ecc0a0e10db0690acbb8ca9d6441c92bce2c23b750" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.051050 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv"] Jan 26 15:26:40 crc kubenswrapper[4823]: E0126 15:26:40.051403 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a238cba-38fe-45bd-b0f4-aca93eb1484b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.051419 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a238cba-38fe-45bd-b0f4-aca93eb1484b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.051600 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a238cba-38fe-45bd-b0f4-aca93eb1484b" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.052168 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.056752 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.060633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.062851 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.063734 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.066005 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.084383 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv"] Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.140814 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.140911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.140985 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.141019 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgnd\" (UniqueName: \"kubernetes.io/projected/50405418-70aa-488f-b5e1-dc48b0888adf-kube-api-access-8bgnd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.242691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.242769 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgnd\" (UniqueName: \"kubernetes.io/projected/50405418-70aa-488f-b5e1-dc48b0888adf-kube-api-access-8bgnd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.242909 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.242977 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.246499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.246797 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.246986 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.258044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bgnd\" (UniqueName: \"kubernetes.io/projected/50405418-70aa-488f-b5e1-dc48b0888adf-kube-api-access-8bgnd\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-mj7bv\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.374061 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.887760 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv"] Jan 26 15:26:40 crc kubenswrapper[4823]: I0126 15:26:40.986115 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" event={"ID":"50405418-70aa-488f-b5e1-dc48b0888adf","Type":"ContainerStarted","Data":"c82e48403bd911117c6291628f12f02b856f84f833634833018513477baa65bb"} Jan 26 15:26:41 crc kubenswrapper[4823]: I0126 15:26:41.997200 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" event={"ID":"50405418-70aa-488f-b5e1-dc48b0888adf","Type":"ContainerStarted","Data":"6f9cbb6d6ae5c993768c1db1b5fe0cdc0d2c4f290f86eb059620ef46d2d0a2d0"} Jan 26 15:26:42 crc kubenswrapper[4823]: I0126 15:26:42.030899 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" podStartSLOduration=1.445350603 podStartE2EDuration="2.03086593s" podCreationTimestamp="2026-01-26 15:26:40 +0000 UTC" firstStartedPulling="2026-01-26 15:26:40.893761162 +0000 UTC m=+2397.579224267" 
lastFinishedPulling="2026-01-26 15:26:41.479276489 +0000 UTC m=+2398.164739594" observedRunningTime="2026-01-26 15:26:42.0145367 +0000 UTC m=+2398.699999835" watchObservedRunningTime="2026-01-26 15:26:42.03086593 +0000 UTC m=+2398.716329075" Jan 26 15:26:51 crc kubenswrapper[4823]: I0126 15:26:51.560458 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:26:51 crc kubenswrapper[4823]: E0126 15:26:51.561171 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:27:04 crc kubenswrapper[4823]: I0126 15:27:04.561487 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:27:04 crc kubenswrapper[4823]: E0126 15:27:04.562258 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:27:16 crc kubenswrapper[4823]: I0126 15:27:16.560920 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:27:16 crc kubenswrapper[4823]: E0126 15:27:16.562152 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:27:19 crc kubenswrapper[4823]: I0126 15:27:19.323336 4823 generic.go:334] "Generic (PLEG): container finished" podID="50405418-70aa-488f-b5e1-dc48b0888adf" containerID="6f9cbb6d6ae5c993768c1db1b5fe0cdc0d2c4f290f86eb059620ef46d2d0a2d0" exitCode=0 Jan 26 15:27:19 crc kubenswrapper[4823]: I0126 15:27:19.323770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" event={"ID":"50405418-70aa-488f-b5e1-dc48b0888adf","Type":"ContainerDied","Data":"6f9cbb6d6ae5c993768c1db1b5fe0cdc0d2c4f290f86eb059620ef46d2d0a2d0"} Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.777818 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.826921 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ceph\") pod \"50405418-70aa-488f-b5e1-dc48b0888adf\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.826990 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-inventory\") pod \"50405418-70aa-488f-b5e1-dc48b0888adf\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.827050 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bgnd\" (UniqueName: 
\"kubernetes.io/projected/50405418-70aa-488f-b5e1-dc48b0888adf-kube-api-access-8bgnd\") pod \"50405418-70aa-488f-b5e1-dc48b0888adf\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.827134 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ssh-key-openstack-edpm-ipam\") pod \"50405418-70aa-488f-b5e1-dc48b0888adf\" (UID: \"50405418-70aa-488f-b5e1-dc48b0888adf\") " Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.832457 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ceph" (OuterVolumeSpecName: "ceph") pod "50405418-70aa-488f-b5e1-dc48b0888adf" (UID: "50405418-70aa-488f-b5e1-dc48b0888adf"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.832928 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50405418-70aa-488f-b5e1-dc48b0888adf-kube-api-access-8bgnd" (OuterVolumeSpecName: "kube-api-access-8bgnd") pod "50405418-70aa-488f-b5e1-dc48b0888adf" (UID: "50405418-70aa-488f-b5e1-dc48b0888adf"). InnerVolumeSpecName "kube-api-access-8bgnd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.856859 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-inventory" (OuterVolumeSpecName: "inventory") pod "50405418-70aa-488f-b5e1-dc48b0888adf" (UID: "50405418-70aa-488f-b5e1-dc48b0888adf"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.866442 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50405418-70aa-488f-b5e1-dc48b0888adf" (UID: "50405418-70aa-488f-b5e1-dc48b0888adf"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.928858 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bgnd\" (UniqueName: \"kubernetes.io/projected/50405418-70aa-488f-b5e1-dc48b0888adf-kube-api-access-8bgnd\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.928909 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.928922 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:20 crc kubenswrapper[4823]: I0126 15:27:20.928935 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50405418-70aa-488f-b5e1-dc48b0888adf-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.349330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" event={"ID":"50405418-70aa-488f-b5e1-dc48b0888adf","Type":"ContainerDied","Data":"c82e48403bd911117c6291628f12f02b856f84f833634833018513477baa65bb"} Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.349421 
4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c82e48403bd911117c6291628f12f02b856f84f833634833018513477baa65bb" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.349485 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-mj7bv" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.881766 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv"] Jan 26 15:27:21 crc kubenswrapper[4823]: E0126 15:27:21.882122 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50405418-70aa-488f-b5e1-dc48b0888adf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.882136 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="50405418-70aa-488f-b5e1-dc48b0888adf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.882288 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="50405418-70aa-488f-b5e1-dc48b0888adf" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.882880 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.888074 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.888566 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.888774 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.891888 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.897869 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.899567 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv"] Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.981141 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.981788 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: 
\"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.981956 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jf9t\" (UniqueName: \"kubernetes.io/projected/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-kube-api-access-7jf9t\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:21 crc kubenswrapper[4823]: I0126 15:27:21.982001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.083923 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.084426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.084627 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.084793 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jf9t\" (UniqueName: \"kubernetes.io/projected/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-kube-api-access-7jf9t\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.093526 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.096099 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.099185 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ssh-key-openstack-edpm-ipam\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: 
\"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.112660 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jf9t\" (UniqueName: \"kubernetes.io/projected/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-kube-api-access-7jf9t\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.197648 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:22 crc kubenswrapper[4823]: I0126 15:27:22.757450 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv"] Jan 26 15:27:23 crc kubenswrapper[4823]: I0126 15:27:23.384048 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" event={"ID":"0f9c42b3-fbf9-4678-ab39-cf772f154f4b","Type":"ContainerStarted","Data":"cb8d6c8d34ffb02df2304c14dfa5cf1c1ae2b55974fca89a38f0df7557065a54"} Jan 26 15:27:24 crc kubenswrapper[4823]: I0126 15:27:24.397347 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" event={"ID":"0f9c42b3-fbf9-4678-ab39-cf772f154f4b","Type":"ContainerStarted","Data":"8a2ba1356b238ad8a0c2514ba1168ca07565a2fd015dd3e699cd02b0533d1917"} Jan 26 15:27:24 crc kubenswrapper[4823]: I0126 15:27:24.430609 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" podStartSLOduration=2.852069051 podStartE2EDuration="3.430589349s" podCreationTimestamp="2026-01-26 15:27:21 +0000 UTC" 
firstStartedPulling="2026-01-26 15:27:22.763836962 +0000 UTC m=+2439.449300067" lastFinishedPulling="2026-01-26 15:27:23.34235726 +0000 UTC m=+2440.027820365" observedRunningTime="2026-01-26 15:27:24.42209711 +0000 UTC m=+2441.107560215" watchObservedRunningTime="2026-01-26 15:27:24.430589349 +0000 UTC m=+2441.116052454" Jan 26 15:27:28 crc kubenswrapper[4823]: I0126 15:27:28.449536 4823 generic.go:334] "Generic (PLEG): container finished" podID="0f9c42b3-fbf9-4678-ab39-cf772f154f4b" containerID="8a2ba1356b238ad8a0c2514ba1168ca07565a2fd015dd3e699cd02b0533d1917" exitCode=0 Jan 26 15:27:28 crc kubenswrapper[4823]: I0126 15:27:28.449629 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" event={"ID":"0f9c42b3-fbf9-4678-ab39-cf772f154f4b","Type":"ContainerDied","Data":"8a2ba1356b238ad8a0c2514ba1168ca07565a2fd015dd3e699cd02b0533d1917"} Jan 26 15:27:28 crc kubenswrapper[4823]: I0126 15:27:28.564080 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:27:28 crc kubenswrapper[4823]: E0126 15:27:28.564734 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:27:29 crc kubenswrapper[4823]: I0126 15:27:29.947069 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.056173 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ceph\") pod \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.056290 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jf9t\" (UniqueName: \"kubernetes.io/projected/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-kube-api-access-7jf9t\") pod \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.056335 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ssh-key-openstack-edpm-ipam\") pod \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.056435 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-inventory\") pod \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\" (UID: \"0f9c42b3-fbf9-4678-ab39-cf772f154f4b\") " Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.062766 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-kube-api-access-7jf9t" (OuterVolumeSpecName: "kube-api-access-7jf9t") pod "0f9c42b3-fbf9-4678-ab39-cf772f154f4b" (UID: "0f9c42b3-fbf9-4678-ab39-cf772f154f4b"). InnerVolumeSpecName "kube-api-access-7jf9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.063397 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ceph" (OuterVolumeSpecName: "ceph") pod "0f9c42b3-fbf9-4678-ab39-cf772f154f4b" (UID: "0f9c42b3-fbf9-4678-ab39-cf772f154f4b"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.086905 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0f9c42b3-fbf9-4678-ab39-cf772f154f4b" (UID: "0f9c42b3-fbf9-4678-ab39-cf772f154f4b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.099930 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-inventory" (OuterVolumeSpecName: "inventory") pod "0f9c42b3-fbf9-4678-ab39-cf772f154f4b" (UID: "0f9c42b3-fbf9-4678-ab39-cf772f154f4b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.160571 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.160632 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.160645 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.160655 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jf9t\" (UniqueName: \"kubernetes.io/projected/0f9c42b3-fbf9-4678-ab39-cf772f154f4b-kube-api-access-7jf9t\") on node \"crc\" DevicePath \"\"" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.473425 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" event={"ID":"0f9c42b3-fbf9-4678-ab39-cf772f154f4b","Type":"ContainerDied","Data":"cb8d6c8d34ffb02df2304c14dfa5cf1c1ae2b55974fca89a38f0df7557065a54"} Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.473464 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb8d6c8d34ffb02df2304c14dfa5cf1c1ae2b55974fca89a38f0df7557065a54" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.473544 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.560249 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq"] Jan 26 15:27:30 crc kubenswrapper[4823]: E0126 15:27:30.560639 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f9c42b3-fbf9-4678-ab39-cf772f154f4b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.560664 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f9c42b3-fbf9-4678-ab39-cf772f154f4b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.560831 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f9c42b3-fbf9-4678-ab39-cf772f154f4b" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.561479 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.564633 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.565135 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.565598 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.565764 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.565895 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.570603 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq"] Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.668607 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxz8\" (UniqueName: \"kubernetes.io/projected/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-kube-api-access-wjxz8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.668750 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ssh-key-openstack-edpm-ipam\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.668795 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.668923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.770271 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjxz8\" (UniqueName: \"kubernetes.io/projected/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-kube-api-access-wjxz8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.770395 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 
crc kubenswrapper[4823]: I0126 15:27:30.770426 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.770520 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.775553 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.776043 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.776965 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" 
(UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.811398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjxz8\" (UniqueName: \"kubernetes.io/projected/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-kube-api-access-wjxz8\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:30 crc kubenswrapper[4823]: I0126 15:27:30.880926 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:27:31 crc kubenswrapper[4823]: I0126 15:27:31.378856 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq"] Jan 26 15:27:31 crc kubenswrapper[4823]: I0126 15:27:31.485511 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" event={"ID":"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a","Type":"ContainerStarted","Data":"91d004efa666887cf7d9478f6377d7a5e5bc4c800ed1a9c27ce1accfacf17446"} Jan 26 15:27:32 crc kubenswrapper[4823]: I0126 15:27:32.495081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" event={"ID":"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a","Type":"ContainerStarted","Data":"faa52f2eb5e8c36496e881f27d1d1775a44fb3f5b5d1ff20ce1bab1c02877f02"} Jan 26 15:27:32 crc kubenswrapper[4823]: I0126 15:27:32.520121 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" podStartSLOduration=2.010423536 podStartE2EDuration="2.520099666s" podCreationTimestamp="2026-01-26 15:27:30 +0000 UTC" 
firstStartedPulling="2026-01-26 15:27:31.384607442 +0000 UTC m=+2448.070070547" lastFinishedPulling="2026-01-26 15:27:31.894283572 +0000 UTC m=+2448.579746677" observedRunningTime="2026-01-26 15:27:32.513850637 +0000 UTC m=+2449.199313752" watchObservedRunningTime="2026-01-26 15:27:32.520099666 +0000 UTC m=+2449.205562781" Jan 26 15:27:41 crc kubenswrapper[4823]: I0126 15:27:41.560979 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:27:41 crc kubenswrapper[4823]: E0126 15:27:41.562351 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:27:56 crc kubenswrapper[4823]: I0126 15:27:56.562767 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:27:56 crc kubenswrapper[4823]: E0126 15:27:56.563981 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:28:10 crc kubenswrapper[4823]: I0126 15:28:10.560034 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:28:10 crc kubenswrapper[4823]: E0126 15:28:10.561124 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:28:17 crc kubenswrapper[4823]: I0126 15:28:17.982497 4823 generic.go:334] "Generic (PLEG): container finished" podID="3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" containerID="faa52f2eb5e8c36496e881f27d1d1775a44fb3f5b5d1ff20ce1bab1c02877f02" exitCode=0 Jan 26 15:28:17 crc kubenswrapper[4823]: I0126 15:28:17.983512 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" event={"ID":"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a","Type":"ContainerDied","Data":"faa52f2eb5e8c36496e881f27d1d1775a44fb3f5b5d1ff20ce1bab1c02877f02"} Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.517465 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.558115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-inventory\") pod \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.558641 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjxz8\" (UniqueName: \"kubernetes.io/projected/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-kube-api-access-wjxz8\") pod \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.558812 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ssh-key-openstack-edpm-ipam\") pod \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.558872 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ceph\") pod \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\" (UID: \"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a\") " Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.565387 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-kube-api-access-wjxz8" (OuterVolumeSpecName: "kube-api-access-wjxz8") pod "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" (UID: "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a"). InnerVolumeSpecName "kube-api-access-wjxz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.566436 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ceph" (OuterVolumeSpecName: "ceph") pod "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" (UID: "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.586851 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-inventory" (OuterVolumeSpecName: "inventory") pod "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" (UID: "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.592602 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" (UID: "3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.661642 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.661688 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjxz8\" (UniqueName: \"kubernetes.io/projected/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-kube-api-access-wjxz8\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.661703 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:19 crc kubenswrapper[4823]: I0126 15:28:19.661715 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.008089 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" event={"ID":"3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a","Type":"ContainerDied","Data":"91d004efa666887cf7d9478f6377d7a5e5bc4c800ed1a9c27ce1accfacf17446"} Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.008167 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91d004efa666887cf7d9478f6377d7a5e5bc4c800ed1a9c27ce1accfacf17446" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.008233 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.124135 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n542v"] Jan 26 15:28:20 crc kubenswrapper[4823]: E0126 15:28:20.124612 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.124638 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.124857 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.125571 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.140191 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.140234 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.140291 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.140349 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.140582 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.142908 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n542v"] Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.172240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.172501 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ceph\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 
crc kubenswrapper[4823]: I0126 15:28:20.172723 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.173157 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fh5\" (UniqueName: \"kubernetes.io/projected/9d8764bd-6f30-4b0f-9ada-a051069f288e-kube-api-access-69fh5\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.274403 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fh5\" (UniqueName: \"kubernetes.io/projected/9d8764bd-6f30-4b0f-9ada-a051069f288e-kube-api-access-69fh5\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.274483 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.274536 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ceph\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: 
\"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.274592 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.280063 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.280159 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ceph\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.280863 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.294215 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fh5\" (UniqueName: \"kubernetes.io/projected/9d8764bd-6f30-4b0f-9ada-a051069f288e-kube-api-access-69fh5\") pod \"ssh-known-hosts-edpm-deployment-n542v\" (UID: 
\"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:20 crc kubenswrapper[4823]: I0126 15:28:20.457008 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:21 crc kubenswrapper[4823]: I0126 15:28:21.075238 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-n542v"] Jan 26 15:28:22 crc kubenswrapper[4823]: I0126 15:28:22.044544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" event={"ID":"9d8764bd-6f30-4b0f-9ada-a051069f288e","Type":"ContainerStarted","Data":"b0cc88f73dc2a61f71912c6116c92a8702e9c82ec78be79388fb817c8114003f"} Jan 26 15:28:22 crc kubenswrapper[4823]: I0126 15:28:22.046218 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" event={"ID":"9d8764bd-6f30-4b0f-9ada-a051069f288e","Type":"ContainerStarted","Data":"2c6a6f0a4f58305698e378989e0677f2032d27a358c1eb519e12533027ecd3d7"} Jan 26 15:28:22 crc kubenswrapper[4823]: I0126 15:28:22.068104 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" podStartSLOduration=1.533962788 podStartE2EDuration="2.068088308s" podCreationTimestamp="2026-01-26 15:28:20 +0000 UTC" firstStartedPulling="2026-01-26 15:28:21.086058694 +0000 UTC m=+2497.771521799" lastFinishedPulling="2026-01-26 15:28:21.620184214 +0000 UTC m=+2498.305647319" observedRunningTime="2026-01-26 15:28:22.064646585 +0000 UTC m=+2498.750109700" watchObservedRunningTime="2026-01-26 15:28:22.068088308 +0000 UTC m=+2498.753551413" Jan 26 15:28:25 crc kubenswrapper[4823]: I0126 15:28:25.561095 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:28:25 crc kubenswrapper[4823]: E0126 15:28:25.561761 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:28:32 crc kubenswrapper[4823]: I0126 15:28:32.133808 4823 generic.go:334] "Generic (PLEG): container finished" podID="9d8764bd-6f30-4b0f-9ada-a051069f288e" containerID="b0cc88f73dc2a61f71912c6116c92a8702e9c82ec78be79388fb817c8114003f" exitCode=0 Jan 26 15:28:32 crc kubenswrapper[4823]: I0126 15:28:32.133897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" event={"ID":"9d8764bd-6f30-4b0f-9ada-a051069f288e","Type":"ContainerDied","Data":"b0cc88f73dc2a61f71912c6116c92a8702e9c82ec78be79388fb817c8114003f"} Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.598924 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.699294 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69fh5\" (UniqueName: \"kubernetes.io/projected/9d8764bd-6f30-4b0f-9ada-a051069f288e-kube-api-access-69fh5\") pod \"9d8764bd-6f30-4b0f-9ada-a051069f288e\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.699479 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ceph\") pod \"9d8764bd-6f30-4b0f-9ada-a051069f288e\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.699649 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-inventory-0\") pod \"9d8764bd-6f30-4b0f-9ada-a051069f288e\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.699840 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ssh-key-openstack-edpm-ipam\") pod \"9d8764bd-6f30-4b0f-9ada-a051069f288e\" (UID: \"9d8764bd-6f30-4b0f-9ada-a051069f288e\") " Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.706843 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d8764bd-6f30-4b0f-9ada-a051069f288e-kube-api-access-69fh5" (OuterVolumeSpecName: "kube-api-access-69fh5") pod "9d8764bd-6f30-4b0f-9ada-a051069f288e" (UID: "9d8764bd-6f30-4b0f-9ada-a051069f288e"). InnerVolumeSpecName "kube-api-access-69fh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.707230 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ceph" (OuterVolumeSpecName: "ceph") pod "9d8764bd-6f30-4b0f-9ada-a051069f288e" (UID: "9d8764bd-6f30-4b0f-9ada-a051069f288e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.730253 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9d8764bd-6f30-4b0f-9ada-a051069f288e" (UID: "9d8764bd-6f30-4b0f-9ada-a051069f288e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.730739 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9d8764bd-6f30-4b0f-9ada-a051069f288e" (UID: "9d8764bd-6f30-4b0f-9ada-a051069f288e"). InnerVolumeSpecName "inventory-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.803893 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.803944 4823 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.803960 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9d8764bd-6f30-4b0f-9ada-a051069f288e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:33 crc kubenswrapper[4823]: I0126 15:28:33.803975 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69fh5\" (UniqueName: \"kubernetes.io/projected/9d8764bd-6f30-4b0f-9ada-a051069f288e-kube-api-access-69fh5\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.168079 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" event={"ID":"9d8764bd-6f30-4b0f-9ada-a051069f288e","Type":"ContainerDied","Data":"2c6a6f0a4f58305698e378989e0677f2032d27a358c1eb519e12533027ecd3d7"} Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.168523 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c6a6f0a4f58305698e378989e0677f2032d27a358c1eb519e12533027ecd3d7" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.168458 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-n542v" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.256910 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd"] Jan 26 15:28:34 crc kubenswrapper[4823]: E0126 15:28:34.257470 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d8764bd-6f30-4b0f-9ada-a051069f288e" containerName="ssh-known-hosts-edpm-deployment" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.257495 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d8764bd-6f30-4b0f-9ada-a051069f288e" containerName="ssh-known-hosts-edpm-deployment" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.257695 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d8764bd-6f30-4b0f-9ada-a051069f288e" containerName="ssh-known-hosts-edpm-deployment" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.258606 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.261882 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.262497 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.262791 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.262876 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.263908 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.266345 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd"] Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.423258 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.423331 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.423536 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trdzl\" (UniqueName: \"kubernetes.io/projected/fd61fd12-7479-477c-8139-de16026c8868-kube-api-access-trdzl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.423866 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.525751 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.525816 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.525917 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trdzl\" 
(UniqueName: \"kubernetes.io/projected/fd61fd12-7479-477c-8139-de16026c8868-kube-api-access-trdzl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.526013 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.530613 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.530867 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.536308 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 
15:28:34.549253 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trdzl\" (UniqueName: \"kubernetes.io/projected/fd61fd12-7479-477c-8139-de16026c8868-kube-api-access-trdzl\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-j6rkd\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:34 crc kubenswrapper[4823]: I0126 15:28:34.576258 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:35 crc kubenswrapper[4823]: I0126 15:28:35.144588 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd"] Jan 26 15:28:35 crc kubenswrapper[4823]: I0126 15:28:35.176206 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" event={"ID":"fd61fd12-7479-477c-8139-de16026c8868","Type":"ContainerStarted","Data":"66684758f8ba7bba1c55267b6be8bc66b29897eb95af4a753ce6437ccf23fcb8"} Jan 26 15:28:36 crc kubenswrapper[4823]: I0126 15:28:36.202488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" event={"ID":"fd61fd12-7479-477c-8139-de16026c8868","Type":"ContainerStarted","Data":"5781b4b1ab4670a3742be04d1bf37ee3ebae02dba9b5d4628479a66dd8a3d69c"} Jan 26 15:28:36 crc kubenswrapper[4823]: I0126 15:28:36.230447 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" podStartSLOduration=1.775757036 podStartE2EDuration="2.230423062s" podCreationTimestamp="2026-01-26 15:28:34 +0000 UTC" firstStartedPulling="2026-01-26 15:28:35.152671116 +0000 UTC m=+2511.838134221" lastFinishedPulling="2026-01-26 15:28:35.607337142 +0000 UTC m=+2512.292800247" observedRunningTime="2026-01-26 15:28:36.229926758 +0000 UTC 
m=+2512.915389863" watchObservedRunningTime="2026-01-26 15:28:36.230423062 +0000 UTC m=+2512.915886167" Jan 26 15:28:38 crc kubenswrapper[4823]: I0126 15:28:38.560928 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:28:38 crc kubenswrapper[4823]: E0126 15:28:38.561833 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:28:44 crc kubenswrapper[4823]: I0126 15:28:44.287175 4823 generic.go:334] "Generic (PLEG): container finished" podID="fd61fd12-7479-477c-8139-de16026c8868" containerID="5781b4b1ab4670a3742be04d1bf37ee3ebae02dba9b5d4628479a66dd8a3d69c" exitCode=0 Jan 26 15:28:44 crc kubenswrapper[4823]: I0126 15:28:44.287637 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" event={"ID":"fd61fd12-7479-477c-8139-de16026c8868","Type":"ContainerDied","Data":"5781b4b1ab4670a3742be04d1bf37ee3ebae02dba9b5d4628479a66dd8a3d69c"} Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.682041 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.768623 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trdzl\" (UniqueName: \"kubernetes.io/projected/fd61fd12-7479-477c-8139-de16026c8868-kube-api-access-trdzl\") pod \"fd61fd12-7479-477c-8139-de16026c8868\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.768784 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-inventory\") pod \"fd61fd12-7479-477c-8139-de16026c8868\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.769025 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ceph\") pod \"fd61fd12-7479-477c-8139-de16026c8868\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.769115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ssh-key-openstack-edpm-ipam\") pod \"fd61fd12-7479-477c-8139-de16026c8868\" (UID: \"fd61fd12-7479-477c-8139-de16026c8868\") " Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.776575 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ceph" (OuterVolumeSpecName: "ceph") pod "fd61fd12-7479-477c-8139-de16026c8868" (UID: "fd61fd12-7479-477c-8139-de16026c8868"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.776719 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd61fd12-7479-477c-8139-de16026c8868-kube-api-access-trdzl" (OuterVolumeSpecName: "kube-api-access-trdzl") pod "fd61fd12-7479-477c-8139-de16026c8868" (UID: "fd61fd12-7479-477c-8139-de16026c8868"). InnerVolumeSpecName "kube-api-access-trdzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.798283 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd61fd12-7479-477c-8139-de16026c8868" (UID: "fd61fd12-7479-477c-8139-de16026c8868"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.800297 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-inventory" (OuterVolumeSpecName: "inventory") pod "fd61fd12-7479-477c-8139-de16026c8868" (UID: "fd61fd12-7479-477c-8139-de16026c8868"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.871031 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trdzl\" (UniqueName: \"kubernetes.io/projected/fd61fd12-7479-477c-8139-de16026c8868-kube-api-access-trdzl\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.871062 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.871073 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:45 crc kubenswrapper[4823]: I0126 15:28:45.871104 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd61fd12-7479-477c-8139-de16026c8868-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.304621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" event={"ID":"fd61fd12-7479-477c-8139-de16026c8868","Type":"ContainerDied","Data":"66684758f8ba7bba1c55267b6be8bc66b29897eb95af4a753ce6437ccf23fcb8"} Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.304690 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-j6rkd" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.304706 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66684758f8ba7bba1c55267b6be8bc66b29897eb95af4a753ce6437ccf23fcb8" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.427806 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb"] Jan 26 15:28:46 crc kubenswrapper[4823]: E0126 15:28:46.428651 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd61fd12-7479-477c-8139-de16026c8868" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.428684 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd61fd12-7479-477c-8139-de16026c8868" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.428939 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd61fd12-7479-477c-8139-de16026c8868" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.430013 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.433676 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.434007 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.434122 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.439481 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb"] Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.444187 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.444458 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:28:46 crc kubenswrapper[4823]: E0126 15:28:46.479593 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd61fd12_7479_477c_8139_de16026c8868.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd61fd12_7479_477c_8139_de16026c8868.slice/crio-66684758f8ba7bba1c55267b6be8bc66b29897eb95af4a753ce6437ccf23fcb8\": RecentStats: unable to find data in memory cache]" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.584514 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6qx\" (UniqueName: 
\"kubernetes.io/projected/dd40af06-84f8-4f72-86b7-ca918c279d1d-kube-api-access-kw6qx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.584969 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.585036 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.585058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.687174 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6qx\" (UniqueName: \"kubernetes.io/projected/dd40af06-84f8-4f72-86b7-ca918c279d1d-kube-api-access-kw6qx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.687462 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.687557 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.687578 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.692048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.692805 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.694250 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.724981 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6qx\" (UniqueName: \"kubernetes.io/projected/dd40af06-84f8-4f72-86b7-ca918c279d1d-kube-api-access-kw6qx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:46 crc kubenswrapper[4823]: I0126 15:28:46.774196 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:47 crc kubenswrapper[4823]: I0126 15:28:47.346996 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb"] Jan 26 15:28:48 crc kubenswrapper[4823]: I0126 15:28:48.326056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" event={"ID":"dd40af06-84f8-4f72-86b7-ca918c279d1d","Type":"ContainerStarted","Data":"64238cee624c76d9b03c9860c474dd42f24c0d3bed84fdf2c334021583ddc54b"} Jan 26 15:28:48 crc kubenswrapper[4823]: I0126 15:28:48.326658 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" event={"ID":"dd40af06-84f8-4f72-86b7-ca918c279d1d","Type":"ContainerStarted","Data":"7af6ed06b4a32b63c881e974811e01c359b0a5402bb4a098d51073e2af979140"} Jan 26 15:28:48 crc kubenswrapper[4823]: I0126 15:28:48.357723 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" podStartSLOduration=1.914969149 podStartE2EDuration="2.357701353s" podCreationTimestamp="2026-01-26 15:28:46 +0000 UTC" firstStartedPulling="2026-01-26 15:28:47.352019761 +0000 UTC m=+2524.037482866" lastFinishedPulling="2026-01-26 15:28:47.794751965 +0000 UTC m=+2524.480215070" observedRunningTime="2026-01-26 15:28:48.347301053 +0000 UTC m=+2525.032764168" watchObservedRunningTime="2026-01-26 15:28:48.357701353 +0000 UTC m=+2525.043164458" Jan 26 15:28:52 crc kubenswrapper[4823]: I0126 15:28:52.560290 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:28:52 crc kubenswrapper[4823]: E0126 15:28:52.561221 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:28:58 crc kubenswrapper[4823]: I0126 15:28:58.425022 4823 generic.go:334] "Generic (PLEG): container finished" podID="dd40af06-84f8-4f72-86b7-ca918c279d1d" containerID="64238cee624c76d9b03c9860c474dd42f24c0d3bed84fdf2c334021583ddc54b" exitCode=0 Jan 26 15:28:58 crc kubenswrapper[4823]: I0126 15:28:58.425172 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" event={"ID":"dd40af06-84f8-4f72-86b7-ca918c279d1d","Type":"ContainerDied","Data":"64238cee624c76d9b03c9860c474dd42f24c0d3bed84fdf2c334021583ddc54b"} Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.862155 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.945449 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ceph\") pod \"dd40af06-84f8-4f72-86b7-ca918c279d1d\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.945567 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-inventory\") pod \"dd40af06-84f8-4f72-86b7-ca918c279d1d\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.945777 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw6qx\" (UniqueName: 
\"kubernetes.io/projected/dd40af06-84f8-4f72-86b7-ca918c279d1d-kube-api-access-kw6qx\") pod \"dd40af06-84f8-4f72-86b7-ca918c279d1d\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.945803 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ssh-key-openstack-edpm-ipam\") pod \"dd40af06-84f8-4f72-86b7-ca918c279d1d\" (UID: \"dd40af06-84f8-4f72-86b7-ca918c279d1d\") " Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.951964 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ceph" (OuterVolumeSpecName: "ceph") pod "dd40af06-84f8-4f72-86b7-ca918c279d1d" (UID: "dd40af06-84f8-4f72-86b7-ca918c279d1d"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.953683 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd40af06-84f8-4f72-86b7-ca918c279d1d-kube-api-access-kw6qx" (OuterVolumeSpecName: "kube-api-access-kw6qx") pod "dd40af06-84f8-4f72-86b7-ca918c279d1d" (UID: "dd40af06-84f8-4f72-86b7-ca918c279d1d"). InnerVolumeSpecName "kube-api-access-kw6qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.976922 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "dd40af06-84f8-4f72-86b7-ca918c279d1d" (UID: "dd40af06-84f8-4f72-86b7-ca918c279d1d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:28:59 crc kubenswrapper[4823]: I0126 15:28:59.977318 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-inventory" (OuterVolumeSpecName: "inventory") pod "dd40af06-84f8-4f72-86b7-ca918c279d1d" (UID: "dd40af06-84f8-4f72-86b7-ca918c279d1d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.047883 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.047928 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.047941 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw6qx\" (UniqueName: \"kubernetes.io/projected/dd40af06-84f8-4f72-86b7-ca918c279d1d-kube-api-access-kw6qx\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.047956 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/dd40af06-84f8-4f72-86b7-ca918c279d1d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.442613 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" event={"ID":"dd40af06-84f8-4f72-86b7-ca918c279d1d","Type":"ContainerDied","Data":"7af6ed06b4a32b63c881e974811e01c359b0a5402bb4a098d51073e2af979140"} Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.442663 4823 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="7af6ed06b4a32b63c881e974811e01c359b0a5402bb4a098d51073e2af979140" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.442694 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.518879 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47"] Jan 26 15:29:00 crc kubenswrapper[4823]: E0126 15:29:00.519248 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd40af06-84f8-4f72-86b7-ca918c279d1d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.519266 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd40af06-84f8-4f72-86b7-ca918c279d1d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.519488 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd40af06-84f8-4f72-86b7-ca918c279d1d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.520083 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.523215 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.524784 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.525019 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.525152 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.525165 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.525245 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.525381 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.525516 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.533928 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47"] Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658327 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658407 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658435 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658501 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658568 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658587 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658629 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjt4r\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-kube-api-access-hjt4r\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658649 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658681 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658755 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658826 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.658871 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ceph\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.759899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.759971 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760000 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760035 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc 
kubenswrapper[4823]: I0126 15:29:00.760058 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760088 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760106 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760131 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760160 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760180 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760205 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjt4r\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-kube-api-access-hjt4r\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760223 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.760245 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.764266 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.764487 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.764560 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.764587 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ceph\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.764921 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.765170 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.766655 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.766733 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.766919 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.767183 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.778006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.778053 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.781730 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hjt4r\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-kube-api-access-hjt4r\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-f9b47\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:00 crc kubenswrapper[4823]: I0126 15:29:00.837143 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:01 crc kubenswrapper[4823]: I0126 15:29:01.348686 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47"] Jan 26 15:29:01 crc kubenswrapper[4823]: I0126 15:29:01.452496 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" event={"ID":"6b8ae9fa-6766-46fe-9729-3997384f9b41","Type":"ContainerStarted","Data":"e49bce1bfc00f9341bb9aaa09ef6edd62862ec17c28b0548b914afa144c73fc1"} Jan 26 15:29:02 crc kubenswrapper[4823]: I0126 15:29:02.462020 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" event={"ID":"6b8ae9fa-6766-46fe-9729-3997384f9b41","Type":"ContainerStarted","Data":"a8eaca6923121ac293e86ca31eccb3cf96100ca622fe891257f8dab0bb6dd046"} Jan 26 15:29:02 crc kubenswrapper[4823]: I0126 15:29:02.493408 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" podStartSLOduration=2.075194635 podStartE2EDuration="2.493392797s" podCreationTimestamp="2026-01-26 15:29:00 +0000 UTC" firstStartedPulling="2026-01-26 15:29:01.354712308 +0000 UTC m=+2538.040175413" lastFinishedPulling="2026-01-26 15:29:01.77291047 +0000 UTC m=+2538.458373575" observedRunningTime="2026-01-26 15:29:02.487013215 +0000 UTC m=+2539.172476320" 
watchObservedRunningTime="2026-01-26 15:29:02.493392797 +0000 UTC m=+2539.178855902" Jan 26 15:29:04 crc kubenswrapper[4823]: I0126 15:29:04.560776 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:29:04 crc kubenswrapper[4823]: E0126 15:29:04.561554 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:29:18 crc kubenswrapper[4823]: I0126 15:29:18.560932 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:29:18 crc kubenswrapper[4823]: E0126 15:29:18.562710 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:29:30 crc kubenswrapper[4823]: I0126 15:29:30.560702 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:29:30 crc kubenswrapper[4823]: E0126 15:29:30.561426 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:29:31 crc kubenswrapper[4823]: I0126 15:29:31.716106 4823 generic.go:334] "Generic (PLEG): container finished" podID="6b8ae9fa-6766-46fe-9729-3997384f9b41" containerID="a8eaca6923121ac293e86ca31eccb3cf96100ca622fe891257f8dab0bb6dd046" exitCode=0 Jan 26 15:29:31 crc kubenswrapper[4823]: I0126 15:29:31.716193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" event={"ID":"6b8ae9fa-6766-46fe-9729-3997384f9b41","Type":"ContainerDied","Data":"a8eaca6923121ac293e86ca31eccb3cf96100ca622fe891257f8dab0bb6dd046"} Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.130223 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221710 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-nova-combined-ca-bundle\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-ovn-default-certs-0\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221818 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-libvirt-combined-ca-bundle\") pod 
\"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221896 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ssh-key-openstack-edpm-ipam\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221935 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjt4r\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-kube-api-access-hjt4r\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221957 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.221985 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.222003 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-inventory\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: 
\"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.222024 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-neutron-metadata-combined-ca-bundle\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.222058 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-repo-setup-combined-ca-bundle\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.222074 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ovn-combined-ca-bundle\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.222143 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-bootstrap-combined-ca-bundle\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.222254 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ceph\") pod \"6b8ae9fa-6766-46fe-9729-3997384f9b41\" (UID: \"6b8ae9fa-6766-46fe-9729-3997384f9b41\") " Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.228969 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.229233 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.229698 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.229793 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.232252 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.234352 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ceph" (OuterVolumeSpecName: "ceph") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.234574 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-kube-api-access-hjt4r" (OuterVolumeSpecName: "kube-api-access-hjt4r") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "kube-api-access-hjt4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.234985 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.235192 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.235292 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.235583 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.257054 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-inventory" (OuterVolumeSpecName: "inventory") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.268173 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6b8ae9fa-6766-46fe-9729-3997384f9b41" (UID: "6b8ae9fa-6766-46fe-9729-3997384f9b41"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.325791 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.325956 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.325973 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.325986 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326000 4823 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-repo-setup-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326024 4823 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326034 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326044 4823 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326055 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326067 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326077 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6b8ae9fa-6766-46fe-9729-3997384f9b41-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326087 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjt4r\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-kube-api-access-hjt4r\") on node 
\"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.326105 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/6b8ae9fa-6766-46fe-9729-3997384f9b41-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.738723 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" event={"ID":"6b8ae9fa-6766-46fe-9729-3997384f9b41","Type":"ContainerDied","Data":"e49bce1bfc00f9341bb9aaa09ef6edd62862ec17c28b0548b914afa144c73fc1"} Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.738844 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e49bce1bfc00f9341bb9aaa09ef6edd62862ec17c28b0548b914afa144c73fc1" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.738880 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-f9b47" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.882327 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk"] Jan 26 15:29:33 crc kubenswrapper[4823]: E0126 15:29:33.883397 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b8ae9fa-6766-46fe-9729-3997384f9b41" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.883439 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b8ae9fa-6766-46fe-9729-3997384f9b41" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.884006 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b8ae9fa-6766-46fe-9729-3997384f9b41" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.892522 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.892630 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk"] Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.895021 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.896739 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.896985 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.897203 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.897460 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.943700 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.943831 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: 
\"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.943877 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:33 crc kubenswrapper[4823]: I0126 15:29:33.943901 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5kzj\" (UniqueName: \"kubernetes.io/projected/3668c188-085e-4a02-8847-a4ccbd1ab067-kube-api-access-g5kzj\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.044566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.044711 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.044753 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5kzj\" (UniqueName: \"kubernetes.io/projected/3668c188-085e-4a02-8847-a4ccbd1ab067-kube-api-access-g5kzj\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.044776 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.049651 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.057061 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.064335 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ssh-key-openstack-edpm-ipam\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: 
\"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.065284 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5kzj\" (UniqueName: \"kubernetes.io/projected/3668c188-085e-4a02-8847-a4ccbd1ab067-kube-api-access-g5kzj\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.218196 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:34 crc kubenswrapper[4823]: I0126 15:29:34.771221 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk"] Jan 26 15:29:35 crc kubenswrapper[4823]: I0126 15:29:35.758631 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" event={"ID":"3668c188-085e-4a02-8847-a4ccbd1ab067","Type":"ContainerStarted","Data":"c6d9214b6d8783ba435d260e8413bbc4fc88fd2cc009b42ad879ef97363d0350"} Jan 26 15:29:36 crc kubenswrapper[4823]: I0126 15:29:36.768907 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" event={"ID":"3668c188-085e-4a02-8847-a4ccbd1ab067","Type":"ContainerStarted","Data":"2550d0bb493195ad379b2ad2c59c982873eb0b5908a1e4f6665cdfb36081bb45"} Jan 26 15:29:36 crc kubenswrapper[4823]: I0126 15:29:36.792613 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" podStartSLOduration=3.127347905 podStartE2EDuration="3.792594243s" podCreationTimestamp="2026-01-26 15:29:33 +0000 UTC" firstStartedPulling="2026-01-26 
15:29:34.776260904 +0000 UTC m=+2571.461724009" lastFinishedPulling="2026-01-26 15:29:35.441507242 +0000 UTC m=+2572.126970347" observedRunningTime="2026-01-26 15:29:36.782315745 +0000 UTC m=+2573.467778850" watchObservedRunningTime="2026-01-26 15:29:36.792594243 +0000 UTC m=+2573.478057348" Jan 26 15:29:41 crc kubenswrapper[4823]: I0126 15:29:41.811098 4823 generic.go:334] "Generic (PLEG): container finished" podID="3668c188-085e-4a02-8847-a4ccbd1ab067" containerID="2550d0bb493195ad379b2ad2c59c982873eb0b5908a1e4f6665cdfb36081bb45" exitCode=0 Jan 26 15:29:41 crc kubenswrapper[4823]: I0126 15:29:41.811203 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" event={"ID":"3668c188-085e-4a02-8847-a4ccbd1ab067","Type":"ContainerDied","Data":"2550d0bb493195ad379b2ad2c59c982873eb0b5908a1e4f6665cdfb36081bb45"} Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.235938 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.345154 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ssh-key-openstack-edpm-ipam\") pod \"3668c188-085e-4a02-8847-a4ccbd1ab067\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.345562 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5kzj\" (UniqueName: \"kubernetes.io/projected/3668c188-085e-4a02-8847-a4ccbd1ab067-kube-api-access-g5kzj\") pod \"3668c188-085e-4a02-8847-a4ccbd1ab067\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.345614 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ceph\") pod \"3668c188-085e-4a02-8847-a4ccbd1ab067\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.345736 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-inventory\") pod \"3668c188-085e-4a02-8847-a4ccbd1ab067\" (UID: \"3668c188-085e-4a02-8847-a4ccbd1ab067\") " Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.375832 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ceph" (OuterVolumeSpecName: "ceph") pod "3668c188-085e-4a02-8847-a4ccbd1ab067" (UID: "3668c188-085e-4a02-8847-a4ccbd1ab067"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.375950 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3668c188-085e-4a02-8847-a4ccbd1ab067-kube-api-access-g5kzj" (OuterVolumeSpecName: "kube-api-access-g5kzj") pod "3668c188-085e-4a02-8847-a4ccbd1ab067" (UID: "3668c188-085e-4a02-8847-a4ccbd1ab067"). InnerVolumeSpecName "kube-api-access-g5kzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.379412 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-inventory" (OuterVolumeSpecName: "inventory") pod "3668c188-085e-4a02-8847-a4ccbd1ab067" (UID: "3668c188-085e-4a02-8847-a4ccbd1ab067"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.380132 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3668c188-085e-4a02-8847-a4ccbd1ab067" (UID: "3668c188-085e-4a02-8847-a4ccbd1ab067"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.449674 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.449749 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.449762 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3668c188-085e-4a02-8847-a4ccbd1ab067-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.449773 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5kzj\" (UniqueName: \"kubernetes.io/projected/3668c188-085e-4a02-8847-a4ccbd1ab067-kube-api-access-g5kzj\") on node \"crc\" DevicePath \"\"" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.834953 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" event={"ID":"3668c188-085e-4a02-8847-a4ccbd1ab067","Type":"ContainerDied","Data":"c6d9214b6d8783ba435d260e8413bbc4fc88fd2cc009b42ad879ef97363d0350"} Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.835001 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d9214b6d8783ba435d260e8413bbc4fc88fd2cc009b42ad879ef97363d0350" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.835078 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.914711 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl"] Jan 26 15:29:43 crc kubenswrapper[4823]: E0126 15:29:43.915087 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3668c188-085e-4a02-8847-a4ccbd1ab067" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.915106 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3668c188-085e-4a02-8847-a4ccbd1ab067" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.915290 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3668c188-085e-4a02-8847-a4ccbd1ab067" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.915926 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.918020 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.918199 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.918443 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.918521 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.918463 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.919884 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 26 15:29:43 crc kubenswrapper[4823]: I0126 15:29:43.933846 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl"] Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.059804 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.059932 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.060001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/540d0393-5844-4d2f-bc69-88a5dd952af0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.060033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.060102 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.060160 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jcl5\" (UniqueName: \"kubernetes.io/projected/540d0393-5844-4d2f-bc69-88a5dd952af0-kube-api-access-4jcl5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.162265 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/540d0393-5844-4d2f-bc69-88a5dd952af0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.162382 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.162437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.162486 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jcl5\" (UniqueName: \"kubernetes.io/projected/540d0393-5844-4d2f-bc69-88a5dd952af0-kube-api-access-4jcl5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.162550 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.162621 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.164999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/540d0393-5844-4d2f-bc69-88a5dd952af0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.167420 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.182347 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.182464 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.182654 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.192121 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jcl5\" (UniqueName: \"kubernetes.io/projected/540d0393-5844-4d2f-bc69-88a5dd952af0-kube-api-access-4jcl5\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-bz7sl\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.230595 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.740525 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl"] Jan 26 15:29:44 crc kubenswrapper[4823]: I0126 15:29:44.842357 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" event={"ID":"540d0393-5844-4d2f-bc69-88a5dd952af0","Type":"ContainerStarted","Data":"c8eda02856eb50923d5a663641fde6624b22aa594e00e5a4c17fc13debfd3062"} Jan 26 15:29:45 crc kubenswrapper[4823]: I0126 15:29:45.560558 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:29:45 crc kubenswrapper[4823]: E0126 15:29:45.561127 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:29:45 crc kubenswrapper[4823]: I0126 15:29:45.850835 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" event={"ID":"540d0393-5844-4d2f-bc69-88a5dd952af0","Type":"ContainerStarted","Data":"3acb2630e84cbc4c5fc7adf11d1285aa71757b7c21bf48214775e25234ba3a7f"} Jan 26 15:29:45 crc kubenswrapper[4823]: I0126 15:29:45.876456 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" podStartSLOduration=2.4227346450000002 podStartE2EDuration="2.876435315s" podCreationTimestamp="2026-01-26 15:29:43 +0000 UTC" firstStartedPulling="2026-01-26 15:29:44.740297963 +0000 UTC 
m=+2581.425761068" lastFinishedPulling="2026-01-26 15:29:45.193998633 +0000 UTC m=+2581.879461738" observedRunningTime="2026-01-26 15:29:45.869162098 +0000 UTC m=+2582.554625203" watchObservedRunningTime="2026-01-26 15:29:45.876435315 +0000 UTC m=+2582.561898580" Jan 26 15:29:57 crc kubenswrapper[4823]: I0126 15:29:57.560311 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:29:57 crc kubenswrapper[4823]: E0126 15:29:57.561020 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.158714 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d"] Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.160182 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.163575 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.163613 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.175996 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d"] Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.264683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6l6\" (UniqueName: \"kubernetes.io/projected/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-kube-api-access-5s6l6\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.264756 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-config-volume\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.264865 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-secret-volume\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.366379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-config-volume\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.366528 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-secret-volume\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.366630 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6l6\" (UniqueName: \"kubernetes.io/projected/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-kube-api-access-5s6l6\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.368113 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-config-volume\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.387286 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-secret-volume\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.422409 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6l6\" (UniqueName: \"kubernetes.io/projected/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-kube-api-access-5s6l6\") pod \"collect-profiles-29490690-4jm9d\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.484083 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:00 crc kubenswrapper[4823]: I0126 15:30:00.989204 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d"] Jan 26 15:30:00 crc kubenswrapper[4823]: W0126 15:30:00.996793 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2af0e3cf_a6b5_4d9e_9077_a14b5dae054b.slice/crio-de65d5ecbc61756b8547f61ae6374960890f976d815206bf77e48342431d7389 WatchSource:0}: Error finding container de65d5ecbc61756b8547f61ae6374960890f976d815206bf77e48342431d7389: Status 404 returned error can't find the container with id de65d5ecbc61756b8547f61ae6374960890f976d815206bf77e48342431d7389 Jan 26 15:30:01 crc kubenswrapper[4823]: I0126 15:30:01.971402 4823 generic.go:334] "Generic (PLEG): container finished" podID="2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" containerID="1fddaa5cdc847a20258746f67aa957ab544ed88413fb8df68dfbb9f17a23e4fe" exitCode=0 Jan 26 15:30:01 crc kubenswrapper[4823]: I0126 15:30:01.971456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" event={"ID":"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b","Type":"ContainerDied","Data":"1fddaa5cdc847a20258746f67aa957ab544ed88413fb8df68dfbb9f17a23e4fe"} Jan 26 15:30:01 crc kubenswrapper[4823]: I0126 15:30:01.971705 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" event={"ID":"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b","Type":"ContainerStarted","Data":"de65d5ecbc61756b8547f61ae6374960890f976d815206bf77e48342431d7389"} Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.314463 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.428797 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-config-volume\") pod \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.428900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-secret-volume\") pod \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.429104 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s6l6\" (UniqueName: \"kubernetes.io/projected/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-kube-api-access-5s6l6\") pod \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\" (UID: \"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b\") " Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.429777 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-config-volume" (OuterVolumeSpecName: "config-volume") pod "2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" (UID: "2af0e3cf-a6b5-4d9e-9077-a14b5dae054b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.438849 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-kube-api-access-5s6l6" (OuterVolumeSpecName: "kube-api-access-5s6l6") pod "2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" (UID: "2af0e3cf-a6b5-4d9e-9077-a14b5dae054b"). InnerVolumeSpecName "kube-api-access-5s6l6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.438848 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" (UID: "2af0e3cf-a6b5-4d9e-9077-a14b5dae054b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.531083 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.531120 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.531130 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s6l6\" (UniqueName: \"kubernetes.io/projected/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b-kube-api-access-5s6l6\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.990671 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" event={"ID":"2af0e3cf-a6b5-4d9e-9077-a14b5dae054b","Type":"ContainerDied","Data":"de65d5ecbc61756b8547f61ae6374960890f976d815206bf77e48342431d7389"} Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.991009 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de65d5ecbc61756b8547f61ae6374960890f976d815206bf77e48342431d7389" Jan 26 15:30:03 crc kubenswrapper[4823]: I0126 15:30:03.991062 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d" Jan 26 15:30:04 crc kubenswrapper[4823]: I0126 15:30:04.398774 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54"] Jan 26 15:30:04 crc kubenswrapper[4823]: I0126 15:30:04.405955 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490645-7md54"] Jan 26 15:30:05 crc kubenswrapper[4823]: I0126 15:30:05.575901 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c67988c-1152-41a0-8f2d-2d3a5eb12c46" path="/var/lib/kubelet/pods/1c67988c-1152-41a0-8f2d-2d3a5eb12c46/volumes" Jan 26 15:30:06 crc kubenswrapper[4823]: I0126 15:30:06.846712 4823 scope.go:117] "RemoveContainer" containerID="8d9f5a5d2dea66e98bdbb18ec0c7f4c0619a6b9a041187cb89aeb36ae237e447" Jan 26 15:30:09 crc kubenswrapper[4823]: I0126 15:30:09.560873 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:30:09 crc kubenswrapper[4823]: E0126 15:30:09.561747 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:30:22 crc kubenswrapper[4823]: I0126 15:30:22.560743 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:30:22 crc kubenswrapper[4823]: E0126 15:30:22.561539 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:30:37 crc kubenswrapper[4823]: I0126 15:30:37.560787 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:30:38 crc kubenswrapper[4823]: I0126 15:30:38.273455 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"451190c06cb1a40bf0bb818365234b55e1c3a1335c546ecb76fd72050c9e629f"} Jan 26 15:30:56 crc kubenswrapper[4823]: I0126 15:30:56.426034 4823 generic.go:334] "Generic (PLEG): container finished" podID="540d0393-5844-4d2f-bc69-88a5dd952af0" containerID="3acb2630e84cbc4c5fc7adf11d1285aa71757b7c21bf48214775e25234ba3a7f" exitCode=0 Jan 26 15:30:56 crc kubenswrapper[4823]: I0126 15:30:56.426120 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" event={"ID":"540d0393-5844-4d2f-bc69-88a5dd952af0","Type":"ContainerDied","Data":"3acb2630e84cbc4c5fc7adf11d1285aa71757b7c21bf48214775e25234ba3a7f"} Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.851714 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.948343 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jcl5\" (UniqueName: \"kubernetes.io/projected/540d0393-5844-4d2f-bc69-88a5dd952af0-kube-api-access-4jcl5\") pod \"540d0393-5844-4d2f-bc69-88a5dd952af0\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.948431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ceph\") pod \"540d0393-5844-4d2f-bc69-88a5dd952af0\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.948474 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-inventory\") pod \"540d0393-5844-4d2f-bc69-88a5dd952af0\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.948572 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ovn-combined-ca-bundle\") pod \"540d0393-5844-4d2f-bc69-88a5dd952af0\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.948621 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/540d0393-5844-4d2f-bc69-88a5dd952af0-ovncontroller-config-0\") pod \"540d0393-5844-4d2f-bc69-88a5dd952af0\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.948660 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ssh-key-openstack-edpm-ipam\") pod \"540d0393-5844-4d2f-bc69-88a5dd952af0\" (UID: \"540d0393-5844-4d2f-bc69-88a5dd952af0\") " Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.955562 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ceph" (OuterVolumeSpecName: "ceph") pod "540d0393-5844-4d2f-bc69-88a5dd952af0" (UID: "540d0393-5844-4d2f-bc69-88a5dd952af0"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.955850 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540d0393-5844-4d2f-bc69-88a5dd952af0-kube-api-access-4jcl5" (OuterVolumeSpecName: "kube-api-access-4jcl5") pod "540d0393-5844-4d2f-bc69-88a5dd952af0" (UID: "540d0393-5844-4d2f-bc69-88a5dd952af0"). InnerVolumeSpecName "kube-api-access-4jcl5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.957125 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "540d0393-5844-4d2f-bc69-88a5dd952af0" (UID: "540d0393-5844-4d2f-bc69-88a5dd952af0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.978142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/540d0393-5844-4d2f-bc69-88a5dd952af0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "540d0393-5844-4d2f-bc69-88a5dd952af0" (UID: "540d0393-5844-4d2f-bc69-88a5dd952af0"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.982271 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "540d0393-5844-4d2f-bc69-88a5dd952af0" (UID: "540d0393-5844-4d2f-bc69-88a5dd952af0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:30:57 crc kubenswrapper[4823]: I0126 15:30:57.988022 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-inventory" (OuterVolumeSpecName: "inventory") pod "540d0393-5844-4d2f-bc69-88a5dd952af0" (UID: "540d0393-5844-4d2f-bc69-88a5dd952af0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.051508 4823 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.051785 4823 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/540d0393-5844-4d2f-bc69-88a5dd952af0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.051873 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.051981 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jcl5\" (UniqueName: 
\"kubernetes.io/projected/540d0393-5844-4d2f-bc69-88a5dd952af0-kube-api-access-4jcl5\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.052069 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.052150 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/540d0393-5844-4d2f-bc69-88a5dd952af0-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.442714 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" event={"ID":"540d0393-5844-4d2f-bc69-88a5dd952af0","Type":"ContainerDied","Data":"c8eda02856eb50923d5a663641fde6624b22aa594e00e5a4c17fc13debfd3062"} Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.442786 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8eda02856eb50923d5a663641fde6624b22aa594e00e5a4c17fc13debfd3062" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.442849 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-bz7sl" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.533292 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn"] Jan 26 15:30:58 crc kubenswrapper[4823]: E0126 15:30:58.533654 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540d0393-5844-4d2f-bc69-88a5dd952af0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.533673 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="540d0393-5844-4d2f-bc69-88a5dd952af0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 15:30:58 crc kubenswrapper[4823]: E0126 15:30:58.533698 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" containerName="collect-profiles" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.533705 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" containerName="collect-profiles" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.533881 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="540d0393-5844-4d2f-bc69-88a5dd952af0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.533903 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" containerName="collect-profiles" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.534490 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.536883 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.537116 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.537186 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.537352 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.537582 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.538190 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.544955 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.549528 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn"] Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689668 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689740 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zg8\" (UniqueName: \"kubernetes.io/projected/79d82d48-4498-49e0-b395-3d33c0ecdf1a-kube-api-access-t7zg8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689856 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689922 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.689992 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791061 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791152 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zg8\" (UniqueName: \"kubernetes.io/projected/79d82d48-4498-49e0-b395-3d33c0ecdf1a-kube-api-access-t7zg8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 
26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791203 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791262 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791304 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791325 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.791350 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.797048 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.797073 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.797716 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.798127 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.798200 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.798398 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.810342 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zg8\" (UniqueName: \"kubernetes.io/projected/79d82d48-4498-49e0-b395-3d33c0ecdf1a-kube-api-access-t7zg8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:58 crc kubenswrapper[4823]: I0126 15:30:58.853090 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:30:59 crc kubenswrapper[4823]: I0126 15:30:59.358694 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn"] Jan 26 15:30:59 crc kubenswrapper[4823]: I0126 15:30:59.454622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" event={"ID":"79d82d48-4498-49e0-b395-3d33c0ecdf1a","Type":"ContainerStarted","Data":"01d8aba39f34da6496f710124f81bf88073282b3b4bf64951cdf9525838d6c47"} Jan 26 15:31:00 crc kubenswrapper[4823]: I0126 15:31:00.463957 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" event={"ID":"79d82d48-4498-49e0-b395-3d33c0ecdf1a","Type":"ContainerStarted","Data":"b9d7d589d0acfe9d39b1e6dad165c399268cae9035c7956490a476fc43497a9d"} Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.463987 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" podStartSLOduration=37.707349875 podStartE2EDuration="38.463961237s" podCreationTimestamp="2026-01-26 15:30:58 +0000 UTC" firstStartedPulling="2026-01-26 15:30:59.362198715 +0000 UTC m=+2656.047661820" lastFinishedPulling="2026-01-26 15:31:00.118810077 +0000 UTC m=+2656.804273182" observedRunningTime="2026-01-26 15:31:00.483767353 +0000 UTC m=+2657.169230468" watchObservedRunningTime="2026-01-26 15:31:36.463961237 +0000 UTC m=+2693.149424382" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.477761 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w6krh"] Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.479748 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.499227 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6krh"] Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.626896 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh952\" (UniqueName: \"kubernetes.io/projected/3b77e522-34d9-497a-ac03-f1ba17278023-kube-api-access-dh952\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.627380 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-catalog-content\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.627478 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-utilities\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.729217 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-utilities\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.729345 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dh952\" (UniqueName: \"kubernetes.io/projected/3b77e522-34d9-497a-ac03-f1ba17278023-kube-api-access-dh952\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.729498 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-catalog-content\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.729731 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-utilities\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.729849 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-catalog-content\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.753698 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh952\" (UniqueName: \"kubernetes.io/projected/3b77e522-34d9-497a-ac03-f1ba17278023-kube-api-access-dh952\") pod \"redhat-marketplace-w6krh\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:36 crc kubenswrapper[4823]: I0126 15:31:36.836406 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:37 crc kubenswrapper[4823]: I0126 15:31:37.298223 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6krh"] Jan 26 15:31:37 crc kubenswrapper[4823]: W0126 15:31:37.313753 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b77e522_34d9_497a_ac03_f1ba17278023.slice/crio-4572b05c2dfbd7600ed74df754df29114ad682c26d27862a32c13cf2b0ac2ed1 WatchSource:0}: Error finding container 4572b05c2dfbd7600ed74df754df29114ad682c26d27862a32c13cf2b0ac2ed1: Status 404 returned error can't find the container with id 4572b05c2dfbd7600ed74df754df29114ad682c26d27862a32c13cf2b0ac2ed1 Jan 26 15:31:37 crc kubenswrapper[4823]: I0126 15:31:37.763313 4823 generic.go:334] "Generic (PLEG): container finished" podID="3b77e522-34d9-497a-ac03-f1ba17278023" containerID="f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934" exitCode=0 Jan 26 15:31:37 crc kubenswrapper[4823]: I0126 15:31:37.763408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6krh" event={"ID":"3b77e522-34d9-497a-ac03-f1ba17278023","Type":"ContainerDied","Data":"f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934"} Jan 26 15:31:37 crc kubenswrapper[4823]: I0126 15:31:37.763655 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6krh" event={"ID":"3b77e522-34d9-497a-ac03-f1ba17278023","Type":"ContainerStarted","Data":"4572b05c2dfbd7600ed74df754df29114ad682c26d27862a32c13cf2b0ac2ed1"} Jan 26 15:31:37 crc kubenswrapper[4823]: I0126 15:31:37.765275 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:31:38 crc kubenswrapper[4823]: I0126 15:31:38.773473 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="3b77e522-34d9-497a-ac03-f1ba17278023" containerID="91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48" exitCode=0 Jan 26 15:31:38 crc kubenswrapper[4823]: I0126 15:31:38.773573 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6krh" event={"ID":"3b77e522-34d9-497a-ac03-f1ba17278023","Type":"ContainerDied","Data":"91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48"} Jan 26 15:31:39 crc kubenswrapper[4823]: I0126 15:31:39.783840 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6krh" event={"ID":"3b77e522-34d9-497a-ac03-f1ba17278023","Type":"ContainerStarted","Data":"55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6"} Jan 26 15:31:39 crc kubenswrapper[4823]: I0126 15:31:39.810622 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w6krh" podStartSLOduration=2.366299249 podStartE2EDuration="3.810594635s" podCreationTimestamp="2026-01-26 15:31:36 +0000 UTC" firstStartedPulling="2026-01-26 15:31:37.765009548 +0000 UTC m=+2694.450472653" lastFinishedPulling="2026-01-26 15:31:39.209304934 +0000 UTC m=+2695.894768039" observedRunningTime="2026-01-26 15:31:39.800124743 +0000 UTC m=+2696.485587878" watchObservedRunningTime="2026-01-26 15:31:39.810594635 +0000 UTC m=+2696.496057780" Jan 26 15:31:46 crc kubenswrapper[4823]: I0126 15:31:46.837686 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:46 crc kubenswrapper[4823]: I0126 15:31:46.839899 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:46 crc kubenswrapper[4823]: I0126 15:31:46.888328 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:47 crc 
kubenswrapper[4823]: I0126 15:31:47.911734 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:47 crc kubenswrapper[4823]: I0126 15:31:47.966731 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6krh"] Jan 26 15:31:49 crc kubenswrapper[4823]: I0126 15:31:49.872733 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w6krh" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="registry-server" containerID="cri-o://55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6" gracePeriod=2 Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.323103 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.510655 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-catalog-content\") pod \"3b77e522-34d9-497a-ac03-f1ba17278023\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.510813 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-utilities\") pod \"3b77e522-34d9-497a-ac03-f1ba17278023\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.510883 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh952\" (UniqueName: \"kubernetes.io/projected/3b77e522-34d9-497a-ac03-f1ba17278023-kube-api-access-dh952\") pod \"3b77e522-34d9-497a-ac03-f1ba17278023\" (UID: \"3b77e522-34d9-497a-ac03-f1ba17278023\") " Jan 26 15:31:50 crc 
kubenswrapper[4823]: I0126 15:31:50.511659 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-utilities" (OuterVolumeSpecName: "utilities") pod "3b77e522-34d9-497a-ac03-f1ba17278023" (UID: "3b77e522-34d9-497a-ac03-f1ba17278023"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.516570 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b77e522-34d9-497a-ac03-f1ba17278023-kube-api-access-dh952" (OuterVolumeSpecName: "kube-api-access-dh952") pod "3b77e522-34d9-497a-ac03-f1ba17278023" (UID: "3b77e522-34d9-497a-ac03-f1ba17278023"). InnerVolumeSpecName "kube-api-access-dh952". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.532545 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b77e522-34d9-497a-ac03-f1ba17278023" (UID: "3b77e522-34d9-497a-ac03-f1ba17278023"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.614225 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.616189 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh952\" (UniqueName: \"kubernetes.io/projected/3b77e522-34d9-497a-ac03-f1ba17278023-kube-api-access-dh952\") on node \"crc\" DevicePath \"\"" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.616220 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b77e522-34d9-497a-ac03-f1ba17278023-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.881892 4823 generic.go:334] "Generic (PLEG): container finished" podID="3b77e522-34d9-497a-ac03-f1ba17278023" containerID="55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6" exitCode=0 Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.881939 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w6krh" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.881942 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6krh" event={"ID":"3b77e522-34d9-497a-ac03-f1ba17278023","Type":"ContainerDied","Data":"55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6"} Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.882073 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w6krh" event={"ID":"3b77e522-34d9-497a-ac03-f1ba17278023","Type":"ContainerDied","Data":"4572b05c2dfbd7600ed74df754df29114ad682c26d27862a32c13cf2b0ac2ed1"} Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.882147 4823 scope.go:117] "RemoveContainer" containerID="55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.909606 4823 scope.go:117] "RemoveContainer" containerID="91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.922637 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6krh"] Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.930706 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w6krh"] Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.938817 4823 scope.go:117] "RemoveContainer" containerID="f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.975530 4823 scope.go:117] "RemoveContainer" containerID="55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6" Jan 26 15:31:50 crc kubenswrapper[4823]: E0126 15:31:50.975971 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6\": container with ID starting with 55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6 not found: ID does not exist" containerID="55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.976030 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6"} err="failed to get container status \"55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6\": rpc error: code = NotFound desc = could not find container \"55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6\": container with ID starting with 55ab6698eddb44a5f4dec78ac1567839842732c7cc1345c2298a6ee81f169fc6 not found: ID does not exist" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.976065 4823 scope.go:117] "RemoveContainer" containerID="91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48" Jan 26 15:31:50 crc kubenswrapper[4823]: E0126 15:31:50.976465 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48\": container with ID starting with 91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48 not found: ID does not exist" containerID="91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.976486 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48"} err="failed to get container status \"91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48\": rpc error: code = NotFound desc = could not find container \"91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48\": container with ID 
starting with 91236c202b300c548e2c84b36754e02622d687832d7cbbb0b190305874ee1f48 not found: ID does not exist" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.976498 4823 scope.go:117] "RemoveContainer" containerID="f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934" Jan 26 15:31:50 crc kubenswrapper[4823]: E0126 15:31:50.976814 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934\": container with ID starting with f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934 not found: ID does not exist" containerID="f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934" Jan 26 15:31:50 crc kubenswrapper[4823]: I0126 15:31:50.976854 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934"} err="failed to get container status \"f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934\": rpc error: code = NotFound desc = could not find container \"f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934\": container with ID starting with f5dd38b5cae4d79c79eb9e19af0e67f9017f0bc2cf99ffa990d49a29c5c37934 not found: ID does not exist" Jan 26 15:31:51 crc kubenswrapper[4823]: I0126 15:31:51.572222 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" path="/var/lib/kubelet/pods/3b77e522-34d9-497a-ac03-f1ba17278023/volumes" Jan 26 15:32:00 crc kubenswrapper[4823]: I0126 15:32:00.974915 4823 generic.go:334] "Generic (PLEG): container finished" podID="79d82d48-4498-49e0-b395-3d33c0ecdf1a" containerID="b9d7d589d0acfe9d39b1e6dad165c399268cae9035c7956490a476fc43497a9d" exitCode=0 Jan 26 15:32:00 crc kubenswrapper[4823]: I0126 15:32:00.975010 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" event={"ID":"79d82d48-4498-49e0-b395-3d33c0ecdf1a","Type":"ContainerDied","Data":"b9d7d589d0acfe9d39b1e6dad165c399268cae9035c7956490a476fc43497a9d"} Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.475229 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.648584 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-metadata-combined-ca-bundle\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.648663 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-inventory\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.648781 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7zg8\" (UniqueName: \"kubernetes.io/projected/79d82d48-4498-49e0-b395-3d33c0ecdf1a-kube-api-access-t7zg8\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.648834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: 
I0126 15:32:02.649332 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ssh-key-openstack-edpm-ipam\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.649425 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-nova-metadata-neutron-config-0\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.649601 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ceph\") pod \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\" (UID: \"79d82d48-4498-49e0-b395-3d33c0ecdf1a\") " Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.664712 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79d82d48-4498-49e0-b395-3d33c0ecdf1a-kube-api-access-t7zg8" (OuterVolumeSpecName: "kube-api-access-t7zg8") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "kube-api-access-t7zg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.665261 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ceph" (OuterVolumeSpecName: "ceph") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.665343 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.677935 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.678025 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.678563 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.683872 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-inventory" (OuterVolumeSpecName: "inventory") pod "79d82d48-4498-49e0-b395-3d33c0ecdf1a" (UID: "79d82d48-4498-49e0-b395-3d33c0ecdf1a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752683 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752736 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752749 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7zg8\" (UniqueName: \"kubernetes.io/projected/79d82d48-4498-49e0-b395-3d33c0ecdf1a-kube-api-access-t7zg8\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752760 4823 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752773 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752782 4823 
reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.752791 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/79d82d48-4498-49e0-b395-3d33c0ecdf1a-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.993227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" event={"ID":"79d82d48-4498-49e0-b395-3d33c0ecdf1a","Type":"ContainerDied","Data":"01d8aba39f34da6496f710124f81bf88073282b3b4bf64951cdf9525838d6c47"} Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.993319 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01d8aba39f34da6496f710124f81bf88073282b3b4bf64951cdf9525838d6c47" Jan 26 15:32:02 crc kubenswrapper[4823]: I0126 15:32:02.993320 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.131932 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6"] Jan 26 15:32:03 crc kubenswrapper[4823]: E0126 15:32:03.132276 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="extract-content" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.132293 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="extract-content" Jan 26 15:32:03 crc kubenswrapper[4823]: E0126 15:32:03.132308 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="extract-utilities" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.132315 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="extract-utilities" Jan 26 15:32:03 crc kubenswrapper[4823]: E0126 15:32:03.132333 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79d82d48-4498-49e0-b395-3d33c0ecdf1a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.132340 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="79d82d48-4498-49e0-b395-3d33c0ecdf1a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 15:32:03 crc kubenswrapper[4823]: E0126 15:32:03.132350 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="registry-server" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.132355 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="registry-server" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 
15:32:03.132541 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="79d82d48-4498-49e0-b395-3d33c0ecdf1a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.132555 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b77e522-34d9-497a-ac03-f1ba17278023" containerName="registry-server" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.133136 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.135585 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.135846 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.135989 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.136096 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.136250 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.142568 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.144608 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6"] Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.261616 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" 
(UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.262072 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.262206 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.262333 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.262483 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vk8l\" (UniqueName: \"kubernetes.io/projected/b2993a3c-5b24-475d-b1cf-38d4611f55fa-kube-api-access-7vk8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" 
(UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.262636 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.363805 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.363853 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.363881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.363909 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.363940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vk8l\" (UniqueName: \"kubernetes.io/projected/b2993a3c-5b24-475d-b1cf-38d4611f55fa-kube-api-access-7vk8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.363978 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.368972 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.369726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.370978 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.376741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.377745 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.386029 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vk8l\" (UniqueName: \"kubernetes.io/projected/b2993a3c-5b24-475d-b1cf-38d4611f55fa-kube-api-access-7vk8l\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:03 crc kubenswrapper[4823]: I0126 15:32:03.453842 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:32:04 crc kubenswrapper[4823]: I0126 15:32:04.037579 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6"] Jan 26 15:32:05 crc kubenswrapper[4823]: I0126 15:32:05.011051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" event={"ID":"b2993a3c-5b24-475d-b1cf-38d4611f55fa","Type":"ContainerStarted","Data":"6d9fde1eae58e8cab5de2142427010169357082556c206201f95e7154b151bd5"} Jan 26 15:32:05 crc kubenswrapper[4823]: I0126 15:32:05.011347 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" event={"ID":"b2993a3c-5b24-475d-b1cf-38d4611f55fa","Type":"ContainerStarted","Data":"ec9ab27d2d190735ebfb63f9f7d3d067b7c23a347073b72eaad1595960d0f605"} Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.279017 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" podStartSLOduration=3.846124178 podStartE2EDuration="4.278996816s" podCreationTimestamp="2026-01-26 15:32:03 +0000 UTC" firstStartedPulling="2026-01-26 15:32:04.044600426 +0000 UTC m=+2720.730063531" lastFinishedPulling="2026-01-26 15:32:04.477473024 +0000 UTC m=+2721.162936169" observedRunningTime="2026-01-26 15:32:05.02738577 +0000 UTC m=+2721.712848895" watchObservedRunningTime="2026-01-26 15:32:07.278996816 +0000 UTC m=+2723.964459931" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.288595 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rvnfn"] Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.290953 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.302093 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvnfn"] Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.355911 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-catalog-content\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.355965 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjl9b\" (UniqueName: \"kubernetes.io/projected/762e202a-0798-4723-af7e-f0227cea1aba-kube-api-access-rjl9b\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.356187 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-utilities\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.458226 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-utilities\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.458392 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-catalog-content\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.458430 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjl9b\" (UniqueName: \"kubernetes.io/projected/762e202a-0798-4723-af7e-f0227cea1aba-kube-api-access-rjl9b\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.459074 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-utilities\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.459143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-catalog-content\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.482156 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjl9b\" (UniqueName: \"kubernetes.io/projected/762e202a-0798-4723-af7e-f0227cea1aba-kube-api-access-rjl9b\") pod \"redhat-operators-rvnfn\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:07 crc kubenswrapper[4823]: I0126 15:32:07.621181 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:08 crc kubenswrapper[4823]: I0126 15:32:08.115003 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rvnfn"] Jan 26 15:32:08 crc kubenswrapper[4823]: W0126 15:32:08.121988 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod762e202a_0798_4723_af7e_f0227cea1aba.slice/crio-718a5e6cfecc7d862871841232975f299e6043aed4e85ec63ff28233ba1bf77a WatchSource:0}: Error finding container 718a5e6cfecc7d862871841232975f299e6043aed4e85ec63ff28233ba1bf77a: Status 404 returned error can't find the container with id 718a5e6cfecc7d862871841232975f299e6043aed4e85ec63ff28233ba1bf77a Jan 26 15:32:09 crc kubenswrapper[4823]: I0126 15:32:09.045957 4823 generic.go:334] "Generic (PLEG): container finished" podID="762e202a-0798-4723-af7e-f0227cea1aba" containerID="c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42" exitCode=0 Jan 26 15:32:09 crc kubenswrapper[4823]: I0126 15:32:09.046031 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvnfn" event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerDied","Data":"c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42"} Jan 26 15:32:09 crc kubenswrapper[4823]: I0126 15:32:09.046958 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvnfn" event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerStarted","Data":"718a5e6cfecc7d862871841232975f299e6043aed4e85ec63ff28233ba1bf77a"} Jan 26 15:32:10 crc kubenswrapper[4823]: I0126 15:32:10.057217 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvnfn" 
event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerStarted","Data":"342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38"} Jan 26 15:32:11 crc kubenswrapper[4823]: I0126 15:32:11.069693 4823 generic.go:334] "Generic (PLEG): container finished" podID="762e202a-0798-4723-af7e-f0227cea1aba" containerID="342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38" exitCode=0 Jan 26 15:32:11 crc kubenswrapper[4823]: I0126 15:32:11.069753 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvnfn" event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerDied","Data":"342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38"} Jan 26 15:32:12 crc kubenswrapper[4823]: I0126 15:32:12.089242 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvnfn" event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerStarted","Data":"c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201"} Jan 26 15:32:17 crc kubenswrapper[4823]: I0126 15:32:17.622240 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:17 crc kubenswrapper[4823]: I0126 15:32:17.622877 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:17 crc kubenswrapper[4823]: I0126 15:32:17.665198 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:17 crc kubenswrapper[4823]: I0126 15:32:17.693276 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rvnfn" podStartSLOduration=8.16056401 podStartE2EDuration="10.69325254s" podCreationTimestamp="2026-01-26 15:32:07 +0000 UTC" firstStartedPulling="2026-01-26 15:32:09.049041329 +0000 UTC m=+2725.734504474" 
lastFinishedPulling="2026-01-26 15:32:11.581729889 +0000 UTC m=+2728.267193004" observedRunningTime="2026-01-26 15:32:12.113094604 +0000 UTC m=+2728.798557719" watchObservedRunningTime="2026-01-26 15:32:17.69325254 +0000 UTC m=+2734.378715645" Jan 26 15:32:18 crc kubenswrapper[4823]: I0126 15:32:18.188819 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:18 crc kubenswrapper[4823]: I0126 15:32:18.241496 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvnfn"] Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.156199 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rvnfn" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="registry-server" containerID="cri-o://c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201" gracePeriod=2 Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.647816 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.810189 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-utilities\") pod \"762e202a-0798-4723-af7e-f0227cea1aba\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.810330 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-catalog-content\") pod \"762e202a-0798-4723-af7e-f0227cea1aba\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.810450 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjl9b\" (UniqueName: \"kubernetes.io/projected/762e202a-0798-4723-af7e-f0227cea1aba-kube-api-access-rjl9b\") pod \"762e202a-0798-4723-af7e-f0227cea1aba\" (UID: \"762e202a-0798-4723-af7e-f0227cea1aba\") " Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.811274 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-utilities" (OuterVolumeSpecName: "utilities") pod "762e202a-0798-4723-af7e-f0227cea1aba" (UID: "762e202a-0798-4723-af7e-f0227cea1aba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.818506 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762e202a-0798-4723-af7e-f0227cea1aba-kube-api-access-rjl9b" (OuterVolumeSpecName: "kube-api-access-rjl9b") pod "762e202a-0798-4723-af7e-f0227cea1aba" (UID: "762e202a-0798-4723-af7e-f0227cea1aba"). InnerVolumeSpecName "kube-api-access-rjl9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.913025 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.913071 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjl9b\" (UniqueName: \"kubernetes.io/projected/762e202a-0798-4723-af7e-f0227cea1aba-kube-api-access-rjl9b\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:20 crc kubenswrapper[4823]: I0126 15:32:20.922597 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "762e202a-0798-4723-af7e-f0227cea1aba" (UID: "762e202a-0798-4723-af7e-f0227cea1aba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.014654 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/762e202a-0798-4723-af7e-f0227cea1aba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.167335 4823 generic.go:334] "Generic (PLEG): container finished" podID="762e202a-0798-4723-af7e-f0227cea1aba" containerID="c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201" exitCode=0 Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.167408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rvnfn" event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerDied","Data":"c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201"} Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.167708 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-rvnfn" event={"ID":"762e202a-0798-4723-af7e-f0227cea1aba","Type":"ContainerDied","Data":"718a5e6cfecc7d862871841232975f299e6043aed4e85ec63ff28233ba1bf77a"} Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.167727 4823 scope.go:117] "RemoveContainer" containerID="c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.167862 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rvnfn" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.197251 4823 scope.go:117] "RemoveContainer" containerID="342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.208725 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rvnfn"] Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.215583 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rvnfn"] Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.236381 4823 scope.go:117] "RemoveContainer" containerID="c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.276574 4823 scope.go:117] "RemoveContainer" containerID="c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201" Jan 26 15:32:21 crc kubenswrapper[4823]: E0126 15:32:21.277094 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201\": container with ID starting with c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201 not found: ID does not exist" containerID="c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.277133 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201"} err="failed to get container status \"c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201\": rpc error: code = NotFound desc = could not find container \"c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201\": container with ID starting with c65563b93afb55ae552baf475abb50d2c6354a98255681f82f1fa0999d8dd201 not found: ID does not exist" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.277161 4823 scope.go:117] "RemoveContainer" containerID="342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38" Jan 26 15:32:21 crc kubenswrapper[4823]: E0126 15:32:21.277639 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38\": container with ID starting with 342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38 not found: ID does not exist" containerID="342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.277667 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38"} err="failed to get container status \"342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38\": rpc error: code = NotFound desc = could not find container \"342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38\": container with ID starting with 342cf06370ec6c9ac37de32a5947b601c490407291754d5adaee6d1724cd9f38 not found: ID does not exist" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.277683 4823 scope.go:117] "RemoveContainer" containerID="c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42" Jan 26 15:32:21 crc kubenswrapper[4823]: E0126 
15:32:21.277987 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42\": container with ID starting with c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42 not found: ID does not exist" containerID="c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.278010 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42"} err="failed to get container status \"c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42\": rpc error: code = NotFound desc = could not find container \"c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42\": container with ID starting with c7fdc77f9ed9672a7f7f8f83c7b4ec168f84cff3c4a5cc4bcaf7424216c84a42 not found: ID does not exist" Jan 26 15:32:21 crc kubenswrapper[4823]: I0126 15:32:21.571879 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762e202a-0798-4723-af7e-f0227cea1aba" path="/var/lib/kubelet/pods/762e202a-0798-4723-af7e-f0227cea1aba/volumes" Jan 26 15:33:04 crc kubenswrapper[4823]: I0126 15:33:04.508462 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:33:04 crc kubenswrapper[4823]: I0126 15:33:04.509078 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 15:33:34 crc kubenswrapper[4823]: I0126 15:33:34.508810 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:33:34 crc kubenswrapper[4823]: I0126 15:33:34.509844 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.198156 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-njzfn"] Jan 26 15:33:45 crc kubenswrapper[4823]: E0126 15:33:45.199895 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="registry-server" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.199915 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="registry-server" Jan 26 15:33:45 crc kubenswrapper[4823]: E0126 15:33:45.199933 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="extract-content" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.199940 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="extract-content" Jan 26 15:33:45 crc kubenswrapper[4823]: E0126 15:33:45.199976 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="extract-utilities" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.199984 4823 
state_mem.go:107] "Deleted CPUSet assignment" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="extract-utilities" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.200178 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="762e202a-0798-4723-af7e-f0227cea1aba" containerName="registry-server" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.201692 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.220300 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njzfn"] Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.319459 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-utilities\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.319556 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4r5\" (UniqueName: \"kubernetes.io/projected/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-kube-api-access-9l4r5\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.319623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-catalog-content\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 
15:33:45.421027 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l4r5\" (UniqueName: \"kubernetes.io/projected/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-kube-api-access-9l4r5\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.421127 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-catalog-content\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.421239 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-utilities\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.421822 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-catalog-content\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.421903 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-utilities\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.445659 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l4r5\" (UniqueName: \"kubernetes.io/projected/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-kube-api-access-9l4r5\") pod \"community-operators-njzfn\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:45 crc kubenswrapper[4823]: I0126 15:33:45.522489 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:46 crc kubenswrapper[4823]: I0126 15:33:46.043212 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-njzfn"] Jan 26 15:33:46 crc kubenswrapper[4823]: I0126 15:33:46.981436 4823 generic.go:334] "Generic (PLEG): container finished" podID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerID="a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669" exitCode=0 Jan 26 15:33:46 crc kubenswrapper[4823]: I0126 15:33:46.981493 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerDied","Data":"a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669"} Jan 26 15:33:46 crc kubenswrapper[4823]: I0126 15:33:46.981841 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerStarted","Data":"479839d237dcf714f740f1a12758db94bd3ee6ec89a4d6ffb5e74d99e139b650"} Jan 26 15:33:48 crc kubenswrapper[4823]: I0126 15:33:48.006652 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerStarted","Data":"4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23"} Jan 26 15:33:49 crc kubenswrapper[4823]: I0126 15:33:49.018827 4823 
generic.go:334] "Generic (PLEG): container finished" podID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerID="4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23" exitCode=0 Jan 26 15:33:49 crc kubenswrapper[4823]: I0126 15:33:49.018876 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerDied","Data":"4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23"} Jan 26 15:33:50 crc kubenswrapper[4823]: I0126 15:33:50.030831 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerStarted","Data":"39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951"} Jan 26 15:33:50 crc kubenswrapper[4823]: I0126 15:33:50.061452 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-njzfn" podStartSLOduration=2.621911398 podStartE2EDuration="5.061429033s" podCreationTimestamp="2026-01-26 15:33:45 +0000 UTC" firstStartedPulling="2026-01-26 15:33:46.983607718 +0000 UTC m=+2823.669070823" lastFinishedPulling="2026-01-26 15:33:49.423125353 +0000 UTC m=+2826.108588458" observedRunningTime="2026-01-26 15:33:50.052469492 +0000 UTC m=+2826.737932637" watchObservedRunningTime="2026-01-26 15:33:50.061429033 +0000 UTC m=+2826.746892158" Jan 26 15:33:55 crc kubenswrapper[4823]: I0126 15:33:55.522780 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:55 crc kubenswrapper[4823]: I0126 15:33:55.523343 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:55 crc kubenswrapper[4823]: I0126 15:33:55.569370 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:56 crc kubenswrapper[4823]: I0126 15:33:56.162785 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.484607 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6qjp7"] Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.486906 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.497324 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qjp7"] Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.655389 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-utilities\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.655454 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk44s\" (UniqueName: \"kubernetes.io/projected/bba7befa-e680-4b05-9799-d46b75b4ada7-kube-api-access-xk44s\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.655663 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-catalog-content\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " 
pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.757591 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-utilities\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.757656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk44s\" (UniqueName: \"kubernetes.io/projected/bba7befa-e680-4b05-9799-d46b75b4ada7-kube-api-access-xk44s\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.757690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-catalog-content\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.758084 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-utilities\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.758091 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-catalog-content\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " 
pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.781484 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk44s\" (UniqueName: \"kubernetes.io/projected/bba7befa-e680-4b05-9799-d46b75b4ada7-kube-api-access-xk44s\") pod \"certified-operators-6qjp7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:57 crc kubenswrapper[4823]: I0126 15:33:57.804978 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:33:58 crc kubenswrapper[4823]: I0126 15:33:58.285125 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6qjp7"] Jan 26 15:33:59 crc kubenswrapper[4823]: I0126 15:33:59.119659 4823 generic.go:334] "Generic (PLEG): container finished" podID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerID="76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2" exitCode=0 Jan 26 15:33:59 crc kubenswrapper[4823]: I0126 15:33:59.119733 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerDied","Data":"76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2"} Jan 26 15:33:59 crc kubenswrapper[4823]: I0126 15:33:59.119931 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerStarted","Data":"a53ac675b9440ecb3402de5be6d90c76197497bd0f22719ba83105673ff6b738"} Jan 26 15:33:59 crc kubenswrapper[4823]: I0126 15:33:59.478181 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njzfn"] Jan 26 15:33:59 crc kubenswrapper[4823]: I0126 15:33:59.478733 4823 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/community-operators-njzfn" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="registry-server" containerID="cri-o://39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951" gracePeriod=2 Jan 26 15:33:59 crc kubenswrapper[4823]: I0126 15:33:59.973806 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.099722 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l4r5\" (UniqueName: \"kubernetes.io/projected/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-kube-api-access-9l4r5\") pod \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.099803 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-catalog-content\") pod \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.099848 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-utilities\") pod \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\" (UID: \"1fcb2473-4991-4556-a5c4-dbb2e1f379a6\") " Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.101406 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-utilities" (OuterVolumeSpecName: "utilities") pod "1fcb2473-4991-4556-a5c4-dbb2e1f379a6" (UID: "1fcb2473-4991-4556-a5c4-dbb2e1f379a6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.101667 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.116588 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-kube-api-access-9l4r5" (OuterVolumeSpecName: "kube-api-access-9l4r5") pod "1fcb2473-4991-4556-a5c4-dbb2e1f379a6" (UID: "1fcb2473-4991-4556-a5c4-dbb2e1f379a6"). InnerVolumeSpecName "kube-api-access-9l4r5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.130024 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerStarted","Data":"0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e"} Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.133976 4823 generic.go:334] "Generic (PLEG): container finished" podID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerID="39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951" exitCode=0 Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.134025 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerDied","Data":"39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951"} Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.134054 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-njzfn" event={"ID":"1fcb2473-4991-4556-a5c4-dbb2e1f379a6","Type":"ContainerDied","Data":"479839d237dcf714f740f1a12758db94bd3ee6ec89a4d6ffb5e74d99e139b650"} 
Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.134077 4823 scope.go:117] "RemoveContainer" containerID="39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.134201 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-njzfn" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.152243 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fcb2473-4991-4556-a5c4-dbb2e1f379a6" (UID: "1fcb2473-4991-4556-a5c4-dbb2e1f379a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.203223 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l4r5\" (UniqueName: \"kubernetes.io/projected/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-kube-api-access-9l4r5\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.203256 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fcb2473-4991-4556-a5c4-dbb2e1f379a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.230733 4823 scope.go:117] "RemoveContainer" containerID="4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.248640 4823 scope.go:117] "RemoveContainer" containerID="a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.263941 4823 scope.go:117] "RemoveContainer" containerID="39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951" Jan 26 15:34:00 crc kubenswrapper[4823]: E0126 15:34:00.264447 4823 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951\": container with ID starting with 39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951 not found: ID does not exist" containerID="39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.264501 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951"} err="failed to get container status \"39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951\": rpc error: code = NotFound desc = could not find container \"39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951\": container with ID starting with 39ccd6e3f47c67babfc7d72349a0fe3687a4ff42dd739f05b51141cb85ece951 not found: ID does not exist" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.264535 4823 scope.go:117] "RemoveContainer" containerID="4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23" Jan 26 15:34:00 crc kubenswrapper[4823]: E0126 15:34:00.264919 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23\": container with ID starting with 4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23 not found: ID does not exist" containerID="4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.264947 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23"} err="failed to get container status \"4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23\": rpc error: code = NotFound desc = could 
not find container \"4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23\": container with ID starting with 4b9d4dceef174f5b7ed463742718f7423c6407d614c35d61ef04117d01f4ce23 not found: ID does not exist" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.264967 4823 scope.go:117] "RemoveContainer" containerID="a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669" Jan 26 15:34:00 crc kubenswrapper[4823]: E0126 15:34:00.265253 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669\": container with ID starting with a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669 not found: ID does not exist" containerID="a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.265278 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669"} err="failed to get container status \"a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669\": rpc error: code = NotFound desc = could not find container \"a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669\": container with ID starting with a6cda320b72a92ecf806c62d09e6f00094b9643c64164896edefbf9f74398669 not found: ID does not exist" Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.471275 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-njzfn"] Jan 26 15:34:00 crc kubenswrapper[4823]: I0126 15:34:00.493190 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-njzfn"] Jan 26 15:34:01 crc kubenswrapper[4823]: I0126 15:34:01.145818 4823 generic.go:334] "Generic (PLEG): container finished" podID="bba7befa-e680-4b05-9799-d46b75b4ada7" 
containerID="0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e" exitCode=0 Jan 26 15:34:01 crc kubenswrapper[4823]: I0126 15:34:01.145869 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerDied","Data":"0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e"} Jan 26 15:34:01 crc kubenswrapper[4823]: I0126 15:34:01.581316 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" path="/var/lib/kubelet/pods/1fcb2473-4991-4556-a5c4-dbb2e1f379a6/volumes" Jan 26 15:34:02 crc kubenswrapper[4823]: I0126 15:34:02.155328 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerStarted","Data":"980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3"} Jan 26 15:34:02 crc kubenswrapper[4823]: I0126 15:34:02.174217 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6qjp7" podStartSLOduration=2.743890835 podStartE2EDuration="5.174193662s" podCreationTimestamp="2026-01-26 15:33:57 +0000 UTC" firstStartedPulling="2026-01-26 15:33:59.121643078 +0000 UTC m=+2835.807106183" lastFinishedPulling="2026-01-26 15:34:01.551945905 +0000 UTC m=+2838.237409010" observedRunningTime="2026-01-26 15:34:02.16963116 +0000 UTC m=+2838.855094275" watchObservedRunningTime="2026-01-26 15:34:02.174193662 +0000 UTC m=+2838.859656767" Jan 26 15:34:04 crc kubenswrapper[4823]: I0126 15:34:04.508669 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:34:04 crc 
kubenswrapper[4823]: I0126 15:34:04.509448 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:34:04 crc kubenswrapper[4823]: I0126 15:34:04.509546 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:34:04 crc kubenswrapper[4823]: I0126 15:34:04.510987 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"451190c06cb1a40bf0bb818365234b55e1c3a1335c546ecb76fd72050c9e629f"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:34:04 crc kubenswrapper[4823]: I0126 15:34:04.511140 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://451190c06cb1a40bf0bb818365234b55e1c3a1335c546ecb76fd72050c9e629f" gracePeriod=600 Jan 26 15:34:05 crc kubenswrapper[4823]: I0126 15:34:05.201673 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="451190c06cb1a40bf0bb818365234b55e1c3a1335c546ecb76fd72050c9e629f" exitCode=0 Jan 26 15:34:05 crc kubenswrapper[4823]: I0126 15:34:05.201735 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"451190c06cb1a40bf0bb818365234b55e1c3a1335c546ecb76fd72050c9e629f"} 
Jan 26 15:34:05 crc kubenswrapper[4823]: I0126 15:34:05.201781 4823 scope.go:117] "RemoveContainer" containerID="99f9e3ed8935a8d5dfaf5737554c1c1d02059ffce37f507b7e22fac9d4173172" Jan 26 15:34:06 crc kubenswrapper[4823]: I0126 15:34:06.217588 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58"} Jan 26 15:34:07 crc kubenswrapper[4823]: I0126 15:34:07.805910 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:34:07 crc kubenswrapper[4823]: I0126 15:34:07.806268 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:34:07 crc kubenswrapper[4823]: I0126 15:34:07.856512 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:34:08 crc kubenswrapper[4823]: I0126 15:34:08.302283 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:34:08 crc kubenswrapper[4823]: I0126 15:34:08.373595 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qjp7"] Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.250751 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6qjp7" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="registry-server" containerID="cri-o://980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3" gracePeriod=2 Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.750794 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.816186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk44s\" (UniqueName: \"kubernetes.io/projected/bba7befa-e680-4b05-9799-d46b75b4ada7-kube-api-access-xk44s\") pod \"bba7befa-e680-4b05-9799-d46b75b4ada7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.816473 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-catalog-content\") pod \"bba7befa-e680-4b05-9799-d46b75b4ada7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.816666 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-utilities\") pod \"bba7befa-e680-4b05-9799-d46b75b4ada7\" (UID: \"bba7befa-e680-4b05-9799-d46b75b4ada7\") " Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.817645 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-utilities" (OuterVolumeSpecName: "utilities") pod "bba7befa-e680-4b05-9799-d46b75b4ada7" (UID: "bba7befa-e680-4b05-9799-d46b75b4ada7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.822866 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba7befa-e680-4b05-9799-d46b75b4ada7-kube-api-access-xk44s" (OuterVolumeSpecName: "kube-api-access-xk44s") pod "bba7befa-e680-4b05-9799-d46b75b4ada7" (UID: "bba7befa-e680-4b05-9799-d46b75b4ada7"). InnerVolumeSpecName "kube-api-access-xk44s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.867392 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bba7befa-e680-4b05-9799-d46b75b4ada7" (UID: "bba7befa-e680-4b05-9799-d46b75b4ada7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.919012 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk44s\" (UniqueName: \"kubernetes.io/projected/bba7befa-e680-4b05-9799-d46b75b4ada7-kube-api-access-xk44s\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.919048 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:10 crc kubenswrapper[4823]: I0126 15:34:10.919059 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba7befa-e680-4b05-9799-d46b75b4ada7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.262428 4823 generic.go:334] "Generic (PLEG): container finished" podID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerID="980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3" exitCode=0 Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.262512 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6qjp7" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.262492 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerDied","Data":"980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3"} Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.263872 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6qjp7" event={"ID":"bba7befa-e680-4b05-9799-d46b75b4ada7","Type":"ContainerDied","Data":"a53ac675b9440ecb3402de5be6d90c76197497bd0f22719ba83105673ff6b738"} Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.263898 4823 scope.go:117] "RemoveContainer" containerID="980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.284339 4823 scope.go:117] "RemoveContainer" containerID="0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.299282 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6qjp7"] Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.309445 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6qjp7"] Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.321935 4823 scope.go:117] "RemoveContainer" containerID="76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.347025 4823 scope.go:117] "RemoveContainer" containerID="980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3" Jan 26 15:34:11 crc kubenswrapper[4823]: E0126 15:34:11.347482 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3\": container with ID starting with 980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3 not found: ID does not exist" containerID="980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.347611 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3"} err="failed to get container status \"980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3\": rpc error: code = NotFound desc = could not find container \"980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3\": container with ID starting with 980ce4847eafa646fbfab9cec92acaeddd80d7b502d03e1bebf1066fc57434d3 not found: ID does not exist" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.347736 4823 scope.go:117] "RemoveContainer" containerID="0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e" Jan 26 15:34:11 crc kubenswrapper[4823]: E0126 15:34:11.348199 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e\": container with ID starting with 0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e not found: ID does not exist" containerID="0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.348291 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e"} err="failed to get container status \"0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e\": rpc error: code = NotFound desc = could not find container \"0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e\": container with ID 
starting with 0e4ddfb66300b18df14b45c427f91e88255c5fe926039758bc23886f5fd0970e not found: ID does not exist" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.348381 4823 scope.go:117] "RemoveContainer" containerID="76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2" Jan 26 15:34:11 crc kubenswrapper[4823]: E0126 15:34:11.348846 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2\": container with ID starting with 76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2 not found: ID does not exist" containerID="76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.348891 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2"} err="failed to get container status \"76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2\": rpc error: code = NotFound desc = could not find container \"76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2\": container with ID starting with 76c921e4708ab9e049fd82f926e70be3ad3d47ff3d34bd511b6076d10b73c6e2 not found: ID does not exist" Jan 26 15:34:11 crc kubenswrapper[4823]: I0126 15:34:11.569433 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" path="/var/lib/kubelet/pods/bba7befa-e680-4b05-9799-d46b75b4ada7/volumes" Jan 26 15:36:30 crc kubenswrapper[4823]: I0126 15:36:30.178379 4823 generic.go:334] "Generic (PLEG): container finished" podID="b2993a3c-5b24-475d-b1cf-38d4611f55fa" containerID="6d9fde1eae58e8cab5de2142427010169357082556c206201f95e7154b151bd5" exitCode=0 Jan 26 15:36:30 crc kubenswrapper[4823]: I0126 15:36:30.178512 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" event={"ID":"b2993a3c-5b24-475d-b1cf-38d4611f55fa","Type":"ContainerDied","Data":"6d9fde1eae58e8cab5de2142427010169357082556c206201f95e7154b151bd5"} Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.545563 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.621034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ceph\") pod \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.621090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-secret-0\") pod \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.621182 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ssh-key-openstack-edpm-ipam\") pod \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.621262 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-combined-ca-bundle\") pod \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.621413 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-7vk8l\" (UniqueName: \"kubernetes.io/projected/b2993a3c-5b24-475d-b1cf-38d4611f55fa-kube-api-access-7vk8l\") pod \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.621456 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-inventory\") pod \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\" (UID: \"b2993a3c-5b24-475d-b1cf-38d4611f55fa\") " Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.626557 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b2993a3c-5b24-475d-b1cf-38d4611f55fa" (UID: "b2993a3c-5b24-475d-b1cf-38d4611f55fa"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.627352 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2993a3c-5b24-475d-b1cf-38d4611f55fa-kube-api-access-7vk8l" (OuterVolumeSpecName: "kube-api-access-7vk8l") pod "b2993a3c-5b24-475d-b1cf-38d4611f55fa" (UID: "b2993a3c-5b24-475d-b1cf-38d4611f55fa"). InnerVolumeSpecName "kube-api-access-7vk8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.628115 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ceph" (OuterVolumeSpecName: "ceph") pod "b2993a3c-5b24-475d-b1cf-38d4611f55fa" (UID: "b2993a3c-5b24-475d-b1cf-38d4611f55fa"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.648814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-inventory" (OuterVolumeSpecName: "inventory") pod "b2993a3c-5b24-475d-b1cf-38d4611f55fa" (UID: "b2993a3c-5b24-475d-b1cf-38d4611f55fa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.650626 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2993a3c-5b24-475d-b1cf-38d4611f55fa" (UID: "b2993a3c-5b24-475d-b1cf-38d4611f55fa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.650830 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b2993a3c-5b24-475d-b1cf-38d4611f55fa" (UID: "b2993a3c-5b24-475d-b1cf-38d4611f55fa"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.723631 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.723669 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.723683 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.723695 4823 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.723707 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vk8l\" (UniqueName: \"kubernetes.io/projected/b2993a3c-5b24-475d-b1cf-38d4611f55fa-kube-api-access-7vk8l\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:31 crc kubenswrapper[4823]: I0126 15:36:31.723718 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2993a3c-5b24-475d-b1cf-38d4611f55fa-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.198529 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" 
event={"ID":"b2993a3c-5b24-475d-b1cf-38d4611f55fa","Type":"ContainerDied","Data":"ec9ab27d2d190735ebfb63f9f7d3d067b7c23a347073b72eaad1595960d0f605"} Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.199129 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec9ab27d2d190735ebfb63f9f7d3d067b7c23a347073b72eaad1595960d0f605" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.198604 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289039 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx"] Jan 26 15:36:32 crc kubenswrapper[4823]: E0126 15:36:32.289393 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="extract-utilities" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289405 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="extract-utilities" Jan 26 15:36:32 crc kubenswrapper[4823]: E0126 15:36:32.289414 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="extract-utilities" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289421 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="extract-utilities" Jan 26 15:36:32 crc kubenswrapper[4823]: E0126 15:36:32.289445 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="extract-content" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289452 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="extract-content" Jan 26 15:36:32 crc 
kubenswrapper[4823]: E0126 15:36:32.289462 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="registry-server" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289468 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="registry-server" Jan 26 15:36:32 crc kubenswrapper[4823]: E0126 15:36:32.289477 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="registry-server" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289482 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="registry-server" Jan 26 15:36:32 crc kubenswrapper[4823]: E0126 15:36:32.289494 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="extract-content" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289500 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="extract-content" Jan 26 15:36:32 crc kubenswrapper[4823]: E0126 15:36:32.289506 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2993a3c-5b24-475d-b1cf-38d4611f55fa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289514 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2993a3c-5b24-475d-b1cf-38d4611f55fa" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289701 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba7befa-e680-4b05-9799-d46b75b4ada7" containerName="registry-server" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289714 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2993a3c-5b24-475d-b1cf-38d4611f55fa" 
containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.289722 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fcb2473-4991-4556-a5c4-dbb2e1f379a6" containerName="registry-server" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.290338 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.292297 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.295046 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.295055 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.295098 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.295295 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kdv4m" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.295376 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.297993 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.298021 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.298224 4823 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openstack"/"openstack-aee-default-env" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.312172 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx"] Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.334939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.334983 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335009 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335051 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ssh-key-openstack-edpm-ipam\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335105 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335139 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335173 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335194 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fs7d\" (UniqueName: \"kubernetes.io/projected/df6f1f36-070b-46b4-af52-c113c5f3c5c8-kube-api-access-6fs7d\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " 
pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335233 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.335274 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.436900 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 
15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.436960 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.436982 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437031 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437055 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437104 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437162 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437230 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.437956 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fs7d\" (UniqueName: \"kubernetes.io/projected/df6f1f36-070b-46b4-af52-c113c5f3c5c8-kube-api-access-6fs7d\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.438276 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.438272 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.441928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: 
I0126 15:36:32.442484 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.442969 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.444420 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.444889 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ssh-key-openstack-edpm-ipam\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.445678 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" 
(UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.448399 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.448787 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.455709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fs7d\" (UniqueName: \"kubernetes.io/projected/df6f1f36-070b-46b4-af52-c113c5f3c5c8-kube-api-access-6fs7d\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:32 crc kubenswrapper[4823]: I0126 15:36:32.608224 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:36:33 crc kubenswrapper[4823]: I0126 15:36:33.134803 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx"] Jan 26 15:36:33 crc kubenswrapper[4823]: I0126 15:36:33.205758 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" event={"ID":"df6f1f36-070b-46b4-af52-c113c5f3c5c8","Type":"ContainerStarted","Data":"086b0617000ffb4595c25c4e3d30ee9def8bacb2a64d235554e7e77bdce00cdd"} Jan 26 15:36:34 crc kubenswrapper[4823]: I0126 15:36:34.218243 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" event={"ID":"df6f1f36-070b-46b4-af52-c113c5f3c5c8","Type":"ContainerStarted","Data":"f0abac9ade73c8b42ff58fcd35150529bf3f6ead0d78cac07e1715f684eb0b0f"} Jan 26 15:36:34 crc kubenswrapper[4823]: I0126 15:36:34.254075 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" podStartSLOduration=1.810673406 podStartE2EDuration="2.254061043s" podCreationTimestamp="2026-01-26 15:36:32 +0000 UTC" firstStartedPulling="2026-01-26 15:36:33.135683696 +0000 UTC m=+2989.821146801" lastFinishedPulling="2026-01-26 15:36:33.579071323 +0000 UTC m=+2990.264534438" observedRunningTime="2026-01-26 15:36:34.25212946 +0000 UTC m=+2990.937592585" watchObservedRunningTime="2026-01-26 15:36:34.254061043 +0000 UTC m=+2990.939524148" Jan 26 15:36:34 crc kubenswrapper[4823]: I0126 15:36:34.508573 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 
15:36:34 crc kubenswrapper[4823]: I0126 15:36:34.508662 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:37:04 crc kubenswrapper[4823]: I0126 15:37:04.508711 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:37:04 crc kubenswrapper[4823]: I0126 15:37:04.509993 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.508515 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.509219 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.509275 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.510241 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.510331 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" gracePeriod=600 Jan 26 15:37:34 crc kubenswrapper[4823]: E0126 15:37:34.628532 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.738662 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" exitCode=0 Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.738700 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58"} Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.738769 4823 scope.go:117] "RemoveContainer" containerID="451190c06cb1a40bf0bb818365234b55e1c3a1335c546ecb76fd72050c9e629f" Jan 26 15:37:34 crc kubenswrapper[4823]: I0126 15:37:34.739384 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:37:34 crc kubenswrapper[4823]: E0126 15:37:34.739647 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:37:46 crc kubenswrapper[4823]: I0126 15:37:46.561468 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:37:46 crc kubenswrapper[4823]: E0126 15:37:46.562713 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:37:57 crc kubenswrapper[4823]: I0126 15:37:57.560410 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:37:57 crc kubenswrapper[4823]: E0126 15:37:57.561157 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:38:11 crc kubenswrapper[4823]: I0126 15:38:11.561845 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:38:11 crc kubenswrapper[4823]: E0126 15:38:11.562991 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:38:24 crc kubenswrapper[4823]: I0126 15:38:24.560156 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:38:24 crc kubenswrapper[4823]: E0126 15:38:24.560909 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:38:39 crc kubenswrapper[4823]: I0126 15:38:39.562185 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:38:39 crc kubenswrapper[4823]: E0126 15:38:39.563823 4823 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:38:50 crc kubenswrapper[4823]: I0126 15:38:50.560955 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:38:50 crc kubenswrapper[4823]: E0126 15:38:50.562235 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:39:01 crc kubenswrapper[4823]: I0126 15:39:01.561301 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:39:01 crc kubenswrapper[4823]: E0126 15:39:01.561893 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:39:14 crc kubenswrapper[4823]: I0126 15:39:14.561250 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:39:14 crc kubenswrapper[4823]: E0126 15:39:14.562040 4823 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:39:16 crc kubenswrapper[4823]: I0126 15:39:16.631140 4823 generic.go:334] "Generic (PLEG): container finished" podID="df6f1f36-070b-46b4-af52-c113c5f3c5c8" containerID="f0abac9ade73c8b42ff58fcd35150529bf3f6ead0d78cac07e1715f684eb0b0f" exitCode=0 Jan 26 15:39:16 crc kubenswrapper[4823]: I0126 15:39:16.631214 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" event={"ID":"df6f1f36-070b-46b4-af52-c113c5f3c5c8","Type":"ContainerDied","Data":"f0abac9ade73c8b42ff58fcd35150529bf3f6ead0d78cac07e1715f684eb0b0f"} Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.019947 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160158 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160223 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-0\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160318 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph-nova-0\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160335 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-0\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160405 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fs7d\" (UniqueName: \"kubernetes.io/projected/df6f1f36-070b-46b4-af52-c113c5f3c5c8-kube-api-access-6fs7d\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160490 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-1\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160521 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ssh-key-openstack-edpm-ipam\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160584 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-extra-config-0\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160612 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-custom-ceph-combined-ca-bundle\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160630 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-1\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.160699 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-inventory\") pod \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\" (UID: \"df6f1f36-070b-46b4-af52-c113c5f3c5c8\") " Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.166262 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.166347 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph" (OuterVolumeSpecName: "ceph") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.176663 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6f1f36-070b-46b4-af52-c113c5f3c5c8-kube-api-access-6fs7d" (OuterVolumeSpecName: "kube-api-access-6fs7d") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "kube-api-access-6fs7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.190541 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.191986 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.193011 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.193726 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.193858 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.196572 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-inventory" (OuterVolumeSpecName: "inventory") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.203771 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.205945 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "df6f1f36-070b-46b4-af52-c113c5f3c5c8" (UID: "df6f1f36-070b-46b4-af52-c113c5f3c5c8"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264311 4823 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-inventory\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264649 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264664 4823 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264681 4823 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264694 4823 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264705 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fs7d\" (UniqueName: \"kubernetes.io/projected/df6f1f36-070b-46b4-af52-c113c5f3c5c8-kube-api-access-6fs7d\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264716 4823 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc 
kubenswrapper[4823]: I0126 15:39:18.264727 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264738 4823 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264749 4823 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.264760 4823 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/df6f1f36-070b-46b4-af52-c113c5f3c5c8-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.649820 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" event={"ID":"df6f1f36-070b-46b4-af52-c113c5f3c5c8","Type":"ContainerDied","Data":"086b0617000ffb4595c25c4e3d30ee9def8bacb2a64d235554e7e77bdce00cdd"} Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.649866 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086b0617000ffb4595c25c4e3d30ee9def8bacb2a64d235554e7e77bdce00cdd" Jan 26 15:39:18 crc kubenswrapper[4823]: I0126 15:39:18.649865 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx" Jan 26 15:39:27 crc kubenswrapper[4823]: I0126 15:39:27.560584 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:39:27 crc kubenswrapper[4823]: E0126 15:39:27.561470 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.827576 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Jan 26 15:39:33 crc kubenswrapper[4823]: E0126 15:39:33.828441 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6f1f36-070b-46b4-af52-c113c5f3c5c8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.828467 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6f1f36-070b-46b4-af52-c113c5f3c5c8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.828672 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6f1f36-070b-46b4-af52-c113c5f3c5c8" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.829630 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.833972 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.843932 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.844098 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.845792 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.848227 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.856380 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.868118 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949590 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a3ab756b-769b-47fd-8ade-e462a900db55-ceph\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc 
kubenswrapper[4823]: I0126 15:39:33.949683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949710 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949729 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949765 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-scripts\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949888 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kzf\" (UniqueName: \"kubernetes.io/projected/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-kube-api-access-n7kzf\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.949953 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950001 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-config-data\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950068 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950092 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950108 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950220 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950285 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-lib-modules\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950328 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-run\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950347 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950388 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-run\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950410 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950445 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfn4c\" (UniqueName: \"kubernetes.io/projected/a3ab756b-769b-47fd-8ade-e462a900db55-kube-api-access-xfn4c\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950468 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-dev\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950522 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950585 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-machine-id\") pod 
\"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950633 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950662 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-sys\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950687 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950717 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " 
pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950798 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950844 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950876 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:33 crc kubenswrapper[4823]: I0126 15:39:33.950928 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.052810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-scripts\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: 
I0126 15:39:34.052887 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7kzf\" (UniqueName: \"kubernetes.io/projected/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-kube-api-access-n7kzf\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.052918 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.052940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.052963 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-config-data\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.052989 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053016 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" 
(UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053050 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053095 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053122 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-lib-modules\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053144 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-run\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053166 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 
15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053185 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-run\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053207 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053235 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfn4c\" (UniqueName: \"kubernetes.io/projected/a3ab756b-769b-47fd-8ade-e462a900db55-kube-api-access-xfn4c\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053256 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-dev\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053278 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-sys\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053308 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053337 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053358 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-sys\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053400 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053420 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053446 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 
26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053472 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053505 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053568 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053606 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053629 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a3ab756b-769b-47fd-8ade-e462a900db55-ceph\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053656 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053681 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053703 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053872 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-sys\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053935 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" 
Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053969 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-lib-modules\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.053990 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-run\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.054015 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.054044 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-run\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.054129 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.054149 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.054392 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.054532 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.055595 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.055859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-dev\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.055932 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" 
Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.055962 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-nvme\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.055981 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.056015 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.056040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.056062 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-dev\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.056081 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-sys\") pod 
\"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.056248 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3ab756b-769b-47fd-8ade-e462a900db55-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.059726 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.059859 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-config-data\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.061540 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.063523 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.063625 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.064192 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.066070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-config-data-custom\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.068902 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3ab756b-769b-47fd-8ade-e462a900db55-scripts\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.071638 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.077020 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7kzf\" (UniqueName: \"kubernetes.io/projected/b85d2c1d-42f6-4e32-a614-e8ddc9e888fa-kube-api-access-n7kzf\") pod 
\"cinder-volume-volume1-0\" (UID: \"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa\") " pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.079444 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfn4c\" (UniqueName: \"kubernetes.io/projected/a3ab756b-769b-47fd-8ade-e462a900db55-kube-api-access-xfn4c\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.082279 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a3ab756b-769b-47fd-8ade-e462a900db55-ceph\") pod \"cinder-backup-0\" (UID: \"a3ab756b-769b-47fd-8ade-e462a900db55\") " pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.151017 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.165711 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.198503 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-hc2s9"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.206892 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.232145 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hc2s9"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.364691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a7ff82-985c-4819-997e-6624e6bdcffc-operator-scripts\") pod \"manila-db-create-hc2s9\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.365060 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whj9t\" (UniqueName: \"kubernetes.io/projected/48a7ff82-985c-4819-997e-6624e6bdcffc-kube-api-access-whj9t\") pod \"manila-db-create-hc2s9\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.367333 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-e4e2-account-create-update-rg4zc"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.368767 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.377412 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.391245 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-e4e2-account-create-update-rg4zc"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.466690 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de86d52a-13e3-4228-99ab-3e47c27432f8-operator-scripts\") pod \"manila-e4e2-account-create-update-rg4zc\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.466768 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmwr2\" (UniqueName: \"kubernetes.io/projected/de86d52a-13e3-4228-99ab-3e47c27432f8-kube-api-access-kmwr2\") pod \"manila-e4e2-account-create-update-rg4zc\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.466878 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a7ff82-985c-4819-997e-6624e6bdcffc-operator-scripts\") pod \"manila-db-create-hc2s9\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.466917 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whj9t\" (UniqueName: \"kubernetes.io/projected/48a7ff82-985c-4819-997e-6624e6bdcffc-kube-api-access-whj9t\") pod \"manila-db-create-hc2s9\" (UID: 
\"48a7ff82-985c-4819-997e-6624e6bdcffc\") " pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.467756 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a7ff82-985c-4819-997e-6624e6bdcffc-operator-scripts\") pod \"manila-db-create-hc2s9\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.503957 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whj9t\" (UniqueName: \"kubernetes.io/projected/48a7ff82-985c-4819-997e-6624e6bdcffc-kube-api-access-whj9t\") pod \"manila-db-create-hc2s9\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.568654 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de86d52a-13e3-4228-99ab-3e47c27432f8-operator-scripts\") pod \"manila-e4e2-account-create-update-rg4zc\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.568735 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmwr2\" (UniqueName: \"kubernetes.io/projected/de86d52a-13e3-4228-99ab-3e47c27432f8-kube-api-access-kmwr2\") pod \"manila-e4e2-account-create-update-rg4zc\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.569878 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de86d52a-13e3-4228-99ab-3e47c27432f8-operator-scripts\") pod \"manila-e4e2-account-create-update-rg4zc\" (UID: 
\"de86d52a-13e3-4228-99ab-3e47c27432f8\") " pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.610072 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmwr2\" (UniqueName: \"kubernetes.io/projected/de86d52a-13e3-4228-99ab-3e47c27432f8-kube-api-access-kmwr2\") pod \"manila-e4e2-account-create-update-rg4zc\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.613460 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.624102 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.628839 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.642628 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.646457 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.646487 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.649949 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.652085 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nm7mr" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.670816 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.676220 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.680174 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.680656 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.702560 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.704921 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.771974 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd7f8\" (UniqueName: \"kubernetes.io/projected/f08433cb-bda2-4072-a5cb-b3ca302d032f-kube-api-access-qd7f8\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772027 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772061 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f08433cb-bda2-4072-a5cb-b3ca302d032f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772133 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772169 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772190 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772225 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772269 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c337780e-6bca-4513-b48d-11b3773ac33b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772304 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772328 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772568 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c337780e-6bca-4513-b48d-11b3773ac33b-ceph\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772632 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772674 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772705 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f08433cb-bda2-4072-a5cb-b3ca302d032f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c337780e-6bca-4513-b48d-11b3773ac33b-logs\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f08433cb-bda2-4072-a5cb-b3ca302d032f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.772856 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lxdk\" (UniqueName: 
\"kubernetes.io/projected/c337780e-6bca-4513-b48d-11b3773ac33b-kube-api-access-7lxdk\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874639 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd7f8\" (UniqueName: \"kubernetes.io/projected/f08433cb-bda2-4072-a5cb-b3ca302d032f-kube-api-access-qd7f8\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874692 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874718 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874743 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f08433cb-bda2-4072-a5cb-b3ca302d032f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874766 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-scripts\") 
pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874794 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874810 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874839 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874871 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c337780e-6bca-4513-b48d-11b3773ac33b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874898 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874918 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c337780e-6bca-4513-b48d-11b3773ac33b-ceph\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874973 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.874995 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.875012 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f08433cb-bda2-4072-a5cb-b3ca302d032f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc 
kubenswrapper[4823]: I0126 15:39:34.875038 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c337780e-6bca-4513-b48d-11b3773ac33b-logs\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.875060 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f08433cb-bda2-4072-a5cb-b3ca302d032f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.875082 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lxdk\" (UniqueName: \"kubernetes.io/projected/c337780e-6bca-4513-b48d-11b3773ac33b-kube-api-access-7lxdk\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.875101 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.875611 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 
15:39:34.877138 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c337780e-6bca-4513-b48d-11b3773ac33b-logs\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.878563 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f08433cb-bda2-4072-a5cb-b3ca302d032f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.880537 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f08433cb-bda2-4072-a5cb-b3ca302d032f-logs\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.881098 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c337780e-6bca-4513-b48d-11b3773ac33b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.881918 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.882810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/f08433cb-bda2-4072-a5cb-b3ca302d032f-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.883484 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.886083 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-config-data\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.894194 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.896162 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.897660 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lxdk\" (UniqueName: \"kubernetes.io/projected/c337780e-6bca-4513-b48d-11b3773ac33b-kube-api-access-7lxdk\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.898650 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-scripts\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.901948 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.905051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd7f8\" (UniqueName: \"kubernetes.io/projected/f08433cb-bda2-4072-a5cb-b3ca302d032f-kube-api-access-qd7f8\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.905775 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f08433cb-bda2-4072-a5cb-b3ca302d032f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.913993 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/c337780e-6bca-4513-b48d-11b3773ac33b-ceph\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.914509 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:39:34 
crc kubenswrapper[4823]: I0126 15:39:34.914928 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c337780e-6bca-4513-b48d-11b3773ac33b-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.930817 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-0\" (UID: \"c337780e-6bca-4513-b48d-11b3773ac33b\") " pod="openstack/glance-default-external-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.933230 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"f08433cb-bda2-4072-a5cb-b3ca302d032f\") " pod="openstack/glance-default-internal-api-0" Jan 26 15:39:34 crc kubenswrapper[4823]: I0126 15:39:34.969678 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.019126 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.173896 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-hc2s9"] Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.299034 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-e4e2-account-create-update-rg4zc"] Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.650123 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.752535 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 26 15:39:35 crc kubenswrapper[4823]: W0126 15:39:35.771780 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc337780e_6bca_4513_b48d_11b3773ac33b.slice/crio-1b59021947b0ed66b45e0bafefbc07a4d7403c37e82a87796a998c8b7e8ba71f WatchSource:0}: Error finding container 1b59021947b0ed66b45e0bafefbc07a4d7403c37e82a87796a998c8b7e8ba71f: Status 404 returned error can't find the container with id 1b59021947b0ed66b45e0bafefbc07a4d7403c37e82a87796a998c8b7e8ba71f Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.819522 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c337780e-6bca-4513-b48d-11b3773ac33b","Type":"ContainerStarted","Data":"1b59021947b0ed66b45e0bafefbc07a4d7403c37e82a87796a998c8b7e8ba71f"} Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.825857 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hc2s9" event={"ID":"48a7ff82-985c-4819-997e-6624e6bdcffc","Type":"ContainerStarted","Data":"90cce2b1c328ee0ee80fda485a806ad12124ef97c3fb89b82c3f61918a934fbd"} Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.828781 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/manila-e4e2-account-create-update-rg4zc" event={"ID":"de86d52a-13e3-4228-99ab-3e47c27432f8","Type":"ContainerStarted","Data":"23df354ddc1ee2877e69aed6aefaf412469aa03a2ed91d8acea089341b723cee"} Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.828822 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-e4e2-account-create-update-rg4zc" event={"ID":"de86d52a-13e3-4228-99ab-3e47c27432f8","Type":"ContainerStarted","Data":"e54410805810dbe105f8c8f014744d6c1eba862a5175bdd6d84d765105e7f1b9"} Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.838084 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a3ab756b-769b-47fd-8ade-e462a900db55","Type":"ContainerStarted","Data":"1013636b24b05b361a90533743caca4c34b5134f7d33b1000309644fc1d769c9"} Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.840275 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa","Type":"ContainerStarted","Data":"46bdefb7ba002c85b042e626b02d231968f054b26f1ac4c5374989fab61c1d18"} Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.857022 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-e4e2-account-create-update-rg4zc" podStartSLOduration=1.8570035900000001 podStartE2EDuration="1.85700359s" podCreationTimestamp="2026-01-26 15:39:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:35.855124449 +0000 UTC m=+3172.540587554" watchObservedRunningTime="2026-01-26 15:39:35.85700359 +0000 UTC m=+3172.542466695" Jan 26 15:39:35 crc kubenswrapper[4823]: I0126 15:39:35.881067 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 26 15:39:35 crc kubenswrapper[4823]: W0126 15:39:35.973407 4823 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf08433cb_bda2_4072_a5cb_b3ca302d032f.slice/crio-674f574ca9afd502542302b896a5b6a16ba83071681f97def329cbf419c24fc2 WatchSource:0}: Error finding container 674f574ca9afd502542302b896a5b6a16ba83071681f97def329cbf419c24fc2: Status 404 returned error can't find the container with id 674f574ca9afd502542302b896a5b6a16ba83071681f97def329cbf419c24fc2 Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.851822 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f08433cb-bda2-4072-a5cb-b3ca302d032f","Type":"ContainerStarted","Data":"0decd84705e64b17b7e8f4dd530fd72e617f2cfeb1607c24b7d58f06b4dd7ac1"} Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.852422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f08433cb-bda2-4072-a5cb-b3ca302d032f","Type":"ContainerStarted","Data":"674f574ca9afd502542302b896a5b6a16ba83071681f97def329cbf419c24fc2"} Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.853406 4823 generic.go:334] "Generic (PLEG): container finished" podID="de86d52a-13e3-4228-99ab-3e47c27432f8" containerID="23df354ddc1ee2877e69aed6aefaf412469aa03a2ed91d8acea089341b723cee" exitCode=0 Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.853459 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-e4e2-account-create-update-rg4zc" event={"ID":"de86d52a-13e3-4228-99ab-3e47c27432f8","Type":"ContainerDied","Data":"23df354ddc1ee2877e69aed6aefaf412469aa03a2ed91d8acea089341b723cee"} Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.870633 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa","Type":"ContainerStarted","Data":"0177b99e626bf7ca985f22a0074a7c6c9902ab98cd08827b627b1a0437d573fa"} Jan 26 15:39:36 crc 
kubenswrapper[4823]: I0126 15:39:36.870672 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"b85d2c1d-42f6-4e32-a614-e8ddc9e888fa","Type":"ContainerStarted","Data":"4bc88afd3973454e0e492583e485cd684b4d8dbf1dff7df076a6a37c13a03994"} Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.909292 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c337780e-6bca-4513-b48d-11b3773ac33b","Type":"ContainerStarted","Data":"6a8458fef6da8043e2ad77e191191e96afcc37ad0df1682eab6ad0c0f2df133c"} Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.911967 4823 generic.go:334] "Generic (PLEG): container finished" podID="48a7ff82-985c-4819-997e-6624e6bdcffc" containerID="855fe1999b46e27a25b8e12d505cc27fa6c1caef786cdddc7587444a91846592" exitCode=0 Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.912057 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hc2s9" event={"ID":"48a7ff82-985c-4819-997e-6624e6bdcffc","Type":"ContainerDied","Data":"855fe1999b46e27a25b8e12d505cc27fa6c1caef786cdddc7587444a91846592"} Jan 26 15:39:36 crc kubenswrapper[4823]: I0126 15:39:36.948339 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=2.820040306 podStartE2EDuration="3.948309872s" podCreationTimestamp="2026-01-26 15:39:33 +0000 UTC" firstStartedPulling="2026-01-26 15:39:34.914204995 +0000 UTC m=+3171.599668100" lastFinishedPulling="2026-01-26 15:39:36.042474561 +0000 UTC m=+3172.727937666" observedRunningTime="2026-01-26 15:39:36.921575157 +0000 UTC m=+3173.607038262" watchObservedRunningTime="2026-01-26 15:39:36.948309872 +0000 UTC m=+3173.633772977" Jan 26 15:39:37 crc kubenswrapper[4823]: I0126 15:39:37.925542 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"a3ab756b-769b-47fd-8ade-e462a900db55","Type":"ContainerStarted","Data":"3825cd94d7ee17f5f91ddea9eea95bd862d9142b17efe0123116f4d13cefdf89"} Jan 26 15:39:37 crc kubenswrapper[4823]: I0126 15:39:37.926058 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"a3ab756b-769b-47fd-8ade-e462a900db55","Type":"ContainerStarted","Data":"062a295a659c5503cda8314a031b8652bf08702faebbc8c39ed4de886c710ac8"} Jan 26 15:39:37 crc kubenswrapper[4823]: I0126 15:39:37.930559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c337780e-6bca-4513-b48d-11b3773ac33b","Type":"ContainerStarted","Data":"3966278af7497e0221470fe729215230febd34949e7b97c802cdbd4225316799"} Jan 26 15:39:37 crc kubenswrapper[4823]: I0126 15:39:37.934849 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f08433cb-bda2-4072-a5cb-b3ca302d032f","Type":"ContainerStarted","Data":"22195772bc05d41af62d2739f4c4e2f4be1c476df3467dcba350923a20bb9706"} Jan 26 15:39:37 crc kubenswrapper[4823]: I0126 15:39:37.972969 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.724493691 podStartE2EDuration="4.972951476s" podCreationTimestamp="2026-01-26 15:39:33 +0000 UTC" firstStartedPulling="2026-01-26 15:39:35.769571468 +0000 UTC m=+3172.455034573" lastFinishedPulling="2026-01-26 15:39:37.018029253 +0000 UTC m=+3173.703492358" observedRunningTime="2026-01-26 15:39:37.959779969 +0000 UTC m=+3174.645243074" watchObservedRunningTime="2026-01-26 15:39:37.972951476 +0000 UTC m=+3174.658414581" Jan 26 15:39:37 crc kubenswrapper[4823]: I0126 15:39:37.993524 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.993500523 podStartE2EDuration="4.993500523s" podCreationTimestamp="2026-01-26 15:39:33 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:37.986783862 +0000 UTC m=+3174.672246967" watchObservedRunningTime="2026-01-26 15:39:37.993500523 +0000 UTC m=+3174.678963618" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.013485 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.013466786 podStartE2EDuration="5.013466786s" podCreationTimestamp="2026-01-26 15:39:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:39:38.010934227 +0000 UTC m=+3174.696397332" watchObservedRunningTime="2026-01-26 15:39:38.013466786 +0000 UTC m=+3174.698929891" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.483488 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.490128 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.674167 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whj9t\" (UniqueName: \"kubernetes.io/projected/48a7ff82-985c-4819-997e-6624e6bdcffc-kube-api-access-whj9t\") pod \"48a7ff82-985c-4819-997e-6624e6bdcffc\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.674301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de86d52a-13e3-4228-99ab-3e47c27432f8-operator-scripts\") pod \"de86d52a-13e3-4228-99ab-3e47c27432f8\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.674356 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a7ff82-985c-4819-997e-6624e6bdcffc-operator-scripts\") pod \"48a7ff82-985c-4819-997e-6624e6bdcffc\" (UID: \"48a7ff82-985c-4819-997e-6624e6bdcffc\") " Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.674462 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmwr2\" (UniqueName: \"kubernetes.io/projected/de86d52a-13e3-4228-99ab-3e47c27432f8-kube-api-access-kmwr2\") pod \"de86d52a-13e3-4228-99ab-3e47c27432f8\" (UID: \"de86d52a-13e3-4228-99ab-3e47c27432f8\") " Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.675203 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de86d52a-13e3-4228-99ab-3e47c27432f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de86d52a-13e3-4228-99ab-3e47c27432f8" (UID: "de86d52a-13e3-4228-99ab-3e47c27432f8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.675310 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a7ff82-985c-4819-997e-6624e6bdcffc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48a7ff82-985c-4819-997e-6624e6bdcffc" (UID: "48a7ff82-985c-4819-997e-6624e6bdcffc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.685786 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de86d52a-13e3-4228-99ab-3e47c27432f8-kube-api-access-kmwr2" (OuterVolumeSpecName: "kube-api-access-kmwr2") pod "de86d52a-13e3-4228-99ab-3e47c27432f8" (UID: "de86d52a-13e3-4228-99ab-3e47c27432f8"). InnerVolumeSpecName "kube-api-access-kmwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.698303 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a7ff82-985c-4819-997e-6624e6bdcffc-kube-api-access-whj9t" (OuterVolumeSpecName: "kube-api-access-whj9t") pod "48a7ff82-985c-4819-997e-6624e6bdcffc" (UID: "48a7ff82-985c-4819-997e-6624e6bdcffc"). InnerVolumeSpecName "kube-api-access-whj9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.777535 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whj9t\" (UniqueName: \"kubernetes.io/projected/48a7ff82-985c-4819-997e-6624e6bdcffc-kube-api-access-whj9t\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.777577 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de86d52a-13e3-4228-99ab-3e47c27432f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.777589 4823 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a7ff82-985c-4819-997e-6624e6bdcffc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.777603 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmwr2\" (UniqueName: \"kubernetes.io/projected/de86d52a-13e3-4228-99ab-3e47c27432f8-kube-api-access-kmwr2\") on node \"crc\" DevicePath \"\"" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.943474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-hc2s9" event={"ID":"48a7ff82-985c-4819-997e-6624e6bdcffc","Type":"ContainerDied","Data":"90cce2b1c328ee0ee80fda485a806ad12124ef97c3fb89b82c3f61918a934fbd"} Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.943513 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90cce2b1c328ee0ee80fda485a806ad12124ef97c3fb89b82c3f61918a934fbd" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.943515 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-hc2s9" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.946271 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-e4e2-account-create-update-rg4zc" Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.946220 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-e4e2-account-create-update-rg4zc" event={"ID":"de86d52a-13e3-4228-99ab-3e47c27432f8","Type":"ContainerDied","Data":"e54410805810dbe105f8c8f014744d6c1eba862a5175bdd6d84d765105e7f1b9"} Jan 26 15:39:38 crc kubenswrapper[4823]: I0126 15:39:38.946348 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e54410805810dbe105f8c8f014744d6c1eba862a5175bdd6d84d765105e7f1b9" Jan 26 15:39:39 crc kubenswrapper[4823]: I0126 15:39:39.151621 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Jan 26 15:39:39 crc kubenswrapper[4823]: I0126 15:39:39.166218 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Jan 26 15:39:39 crc kubenswrapper[4823]: I0126 15:39:39.561913 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:39:39 crc kubenswrapper[4823]: E0126 15:39:39.562551 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.380463 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.381065 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Jan 26 
15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.521513 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-7ws9r"] Jan 26 15:39:44 crc kubenswrapper[4823]: E0126 15:39:44.521889 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a7ff82-985c-4819-997e-6624e6bdcffc" containerName="mariadb-database-create" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.521906 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a7ff82-985c-4819-997e-6624e6bdcffc" containerName="mariadb-database-create" Jan 26 15:39:44 crc kubenswrapper[4823]: E0126 15:39:44.521940 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de86d52a-13e3-4228-99ab-3e47c27432f8" containerName="mariadb-account-create-update" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.521947 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="de86d52a-13e3-4228-99ab-3e47c27432f8" containerName="mariadb-account-create-update" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.522118 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a7ff82-985c-4819-997e-6624e6bdcffc" containerName="mariadb-database-create" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.522131 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="de86d52a-13e3-4228-99ab-3e47c27432f8" containerName="mariadb-account-create-update" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.522755 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.524971 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-8w2gh" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.525170 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.537951 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-7ws9r"] Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.610720 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-job-config-data\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.610821 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j227\" (UniqueName: \"kubernetes.io/projected/8d687761-776a-49bd-ab09-d5672e514edc-kube-api-access-5j227\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.610975 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-combined-ca-bundle\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.611035 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-config-data\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.712586 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-combined-ca-bundle\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.712972 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-config-data\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.713108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-job-config-data\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.713222 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j227\" (UniqueName: \"kubernetes.io/projected/8d687761-776a-49bd-ab09-d5672e514edc-kube-api-access-5j227\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.718741 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-combined-ca-bundle\") pod \"manila-db-sync-7ws9r\" (UID: 
\"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.718929 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-job-config-data\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.730816 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j227\" (UniqueName: \"kubernetes.io/projected/8d687761-776a-49bd-ab09-d5672e514edc-kube-api-access-5j227\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.731004 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-config-data\") pod \"manila-db-sync-7ws9r\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.841511 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-7ws9r" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.970914 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 15:39:44 crc kubenswrapper[4823]: I0126 15:39:44.971157 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.013324 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.014137 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.020632 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.020673 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.048827 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.064747 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.083105 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:45 crc kubenswrapper[4823]: I0126 15:39:45.458740 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-7ws9r"] Jan 26 15:39:46 crc kubenswrapper[4823]: I0126 15:39:46.018116 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/manila-db-sync-7ws9r" event={"ID":"8d687761-776a-49bd-ab09-d5672e514edc","Type":"ContainerStarted","Data":"d06503bf573a5e297f2fda8c405fe35a7fc16a92e009121dafc766d3b326b9ec"} Jan 26 15:39:46 crc kubenswrapper[4823]: I0126 15:39:46.019428 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 26 15:39:46 crc kubenswrapper[4823]: I0126 15:39:46.019559 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:46 crc kubenswrapper[4823]: I0126 15:39:46.019678 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:47 crc kubenswrapper[4823]: I0126 15:39:47.057639 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.045314 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.066010 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.066050 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.066087 4823 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.114947 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.829480 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 26 15:39:48 crc kubenswrapper[4823]: I0126 15:39:48.830601 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 26 15:39:51 crc kubenswrapper[4823]: I0126 15:39:51.096145 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-7ws9r" event={"ID":"8d687761-776a-49bd-ab09-d5672e514edc","Type":"ContainerStarted","Data":"5fb8157f74a685e0bae4d57bb3787cf9f8186e6f7219e1ea08f7f5b975829bf2"} Jan 26 15:39:51 crc kubenswrapper[4823]: I0126 15:39:51.126788 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-7ws9r" podStartSLOduration=2.345642151 podStartE2EDuration="7.126768195s" podCreationTimestamp="2026-01-26 15:39:44 +0000 UTC" firstStartedPulling="2026-01-26 15:39:45.466303709 +0000 UTC m=+3182.151766814" lastFinishedPulling="2026-01-26 15:39:50.247429753 +0000 UTC m=+3186.932892858" observedRunningTime="2026-01-26 15:39:51.117648127 +0000 UTC m=+3187.803111242" watchObservedRunningTime="2026-01-26 15:39:51.126768195 +0000 UTC m=+3187.812231300" Jan 26 15:39:54 crc kubenswrapper[4823]: I0126 15:39:54.561083 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:39:54 crc kubenswrapper[4823]: E0126 15:39:54.562089 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:40:01 crc kubenswrapper[4823]: I0126 15:40:01.201877 4823 generic.go:334] "Generic (PLEG): container finished" podID="8d687761-776a-49bd-ab09-d5672e514edc" containerID="5fb8157f74a685e0bae4d57bb3787cf9f8186e6f7219e1ea08f7f5b975829bf2" exitCode=0 Jan 26 15:40:01 crc kubenswrapper[4823]: I0126 15:40:01.201979 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/manila-db-sync-7ws9r" event={"ID":"8d687761-776a-49bd-ab09-d5672e514edc","Type":"ContainerDied","Data":"5fb8157f74a685e0bae4d57bb3787cf9f8186e6f7219e1ea08f7f5b975829bf2"} Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.631247 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-sync-7ws9r" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.722702 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-combined-ca-bundle\") pod \"8d687761-776a-49bd-ab09-d5672e514edc\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.722749 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-job-config-data\") pod \"8d687761-776a-49bd-ab09-d5672e514edc\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.722910 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-config-data\") pod \"8d687761-776a-49bd-ab09-d5672e514edc\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.723070 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j227\" (UniqueName: \"kubernetes.io/projected/8d687761-776a-49bd-ab09-d5672e514edc-kube-api-access-5j227\") pod \"8d687761-776a-49bd-ab09-d5672e514edc\" (UID: \"8d687761-776a-49bd-ab09-d5672e514edc\") " Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.730589 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/8d687761-776a-49bd-ab09-d5672e514edc-kube-api-access-5j227" (OuterVolumeSpecName: "kube-api-access-5j227") pod "8d687761-776a-49bd-ab09-d5672e514edc" (UID: "8d687761-776a-49bd-ab09-d5672e514edc"). InnerVolumeSpecName "kube-api-access-5j227". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.730933 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "8d687761-776a-49bd-ab09-d5672e514edc" (UID: "8d687761-776a-49bd-ab09-d5672e514edc"). InnerVolumeSpecName "job-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.735834 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-config-data" (OuterVolumeSpecName: "config-data") pod "8d687761-776a-49bd-ab09-d5672e514edc" (UID: "8d687761-776a-49bd-ab09-d5672e514edc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.752512 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d687761-776a-49bd-ab09-d5672e514edc" (UID: "8d687761-776a-49bd-ab09-d5672e514edc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.825878 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5j227\" (UniqueName: \"kubernetes.io/projected/8d687761-776a-49bd-ab09-d5672e514edc-kube-api-access-5j227\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.825917 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.825930 4823 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-job-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:02 crc kubenswrapper[4823]: I0126 15:40:02.825940 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d687761-776a-49bd-ab09-d5672e514edc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.223587 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-7ws9r" event={"ID":"8d687761-776a-49bd-ab09-d5672e514edc","Type":"ContainerDied","Data":"d06503bf573a5e297f2fda8c405fe35a7fc16a92e009121dafc766d3b326b9ec"} Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.223643 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d06503bf573a5e297f2fda8c405fe35a7fc16a92e009121dafc766d3b326b9ec" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.223737 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-7ws9r" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.547534 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:03 crc kubenswrapper[4823]: E0126 15:40:03.548062 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d687761-776a-49bd-ab09-d5672e514edc" containerName="manila-db-sync" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.548086 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d687761-776a-49bd-ab09-d5672e514edc" containerName="manila-db-sync" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.548333 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d687761-776a-49bd-ab09-d5672e514edc" containerName="manila-db-sync" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.552313 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.554675 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.555444 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-8w2gh" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.555738 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.566190 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.573752 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.599409 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:03 crc 
kubenswrapper[4823]: I0126 15:40:03.616944 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.622633 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.652325 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83770651-fa4d-4cf4-b39c-0e09f0658a3f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.653886 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.655870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.665861 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4bln\" (UniqueName: \"kubernetes.io/projected/83770651-fa4d-4cf4-b39c-0e09f0658a3f-kube-api-access-t4bln\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.666134 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.666159 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-scripts\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.683443 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.705771 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5pz9v"] Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.714658 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.726555 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5pz9v"] Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.768440 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-ceph\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.768699 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83770651-fa4d-4cf4-b39c-0e09f0658a3f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.768815 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.768909 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-scripts\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.768989 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhwb\" (UniqueName: 
\"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-kube-api-access-6nhwb\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.769108 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.769184 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4bln\" (UniqueName: \"kubernetes.io/projected/83770651-fa4d-4cf4-b39c-0e09f0658a3f-kube-api-access-t4bln\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.769260 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.769338 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.769432 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.771712 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.771835 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.771907 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-scripts\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.772330 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.769976 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83770651-fa4d-4cf4-b39c-0e09f0658a3f-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.778070 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.779499 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.783247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.790271 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4bln\" (UniqueName: \"kubernetes.io/projected/83770651-fa4d-4cf4-b39c-0e09f0658a3f-kube-api-access-t4bln\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.800973 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-scripts\") pod \"manila-scheduler-0\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.873806 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5jrd\" (UniqueName: \"kubernetes.io/projected/6b032822-a0f5-42d5-81d4-a3804a3714b9-kube-api-access-n5jrd\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874175 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-scripts\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874211 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhwb\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-kube-api-access-6nhwb\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874244 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874293 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-config\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874327 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874353 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874386 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874407 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874437 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874460 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874481 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-ceph\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874565 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874895 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.874914 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-var-lib-manila\") pod 
\"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.878893 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.878984 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.879270 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-ceph\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.884993 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.886349 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-scripts\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.894596 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.900264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhwb\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-kube-api-access-6nhwb\") pod \"manila-share-share1-0\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.975585 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.976442 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.976512 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-config\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.976545 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.976588 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.976651 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 
15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.976672 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5jrd\" (UniqueName: \"kubernetes.io/projected/6b032822-a0f5-42d5-81d4-a3804a3714b9-kube-api-access-n5jrd\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.977535 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-ovsdbserver-sb\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.977565 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-config\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.977587 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-ovsdbserver-nb\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.977696 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-dns-svc\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:03 crc kubenswrapper[4823]: I0126 15:40:03.978177 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/6b032822-a0f5-42d5-81d4-a3804a3714b9-openstack-edpm-ipam\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.012353 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5jrd\" (UniqueName: \"kubernetes.io/projected/6b032822-a0f5-42d5-81d4-a3804a3714b9-kube-api-access-n5jrd\") pod \"dnsmasq-dns-69655fd4bf-5pz9v\" (UID: \"6b032822-a0f5-42d5-81d4-a3804a3714b9\") " pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.022495 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.024461 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.028136 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.038710 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.041421 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.184785 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data-custom\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.184844 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-852rm\" (UniqueName: \"kubernetes.io/projected/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-kube-api-access-852rm\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.184887 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-scripts\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.184923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.185006 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 
15:40:04.185314 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-logs\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.185860 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-etc-machine-id\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.287832 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.287913 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-logs\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.287942 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-etc-machine-id\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.288023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data-custom\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.288041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-852rm\" (UniqueName: \"kubernetes.io/projected/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-kube-api-access-852rm\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.288062 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-scripts\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.288078 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.288933 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-etc-machine-id\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.290101 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-logs\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.295101 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-scripts\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.295159 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.295348 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.295759 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data-custom\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.307415 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-852rm\" (UniqueName: \"kubernetes.io/projected/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-kube-api-access-852rm\") pod \"manila-api-0\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.374452 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.459976 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:04 crc kubenswrapper[4823]: W0126 15:40:04.470604 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83770651_fa4d_4cf4_b39c_0e09f0658a3f.slice/crio-86948ec6ed05bb806e8f71ba839220167b51dd8bd993f0dc84cffcdcaaa293ea WatchSource:0}: Error finding container 86948ec6ed05bb806e8f71ba839220167b51dd8bd993f0dc84cffcdcaaa293ea: Status 404 returned error can't find the container with id 86948ec6ed05bb806e8f71ba839220167b51dd8bd993f0dc84cffcdcaaa293ea Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.668594 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69655fd4bf-5pz9v"] Jan 26 15:40:04 crc kubenswrapper[4823]: I0126 15:40:04.684749 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.048659 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:05 crc kubenswrapper[4823]: W0126 15:40:05.074165 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27b9e21d_79f4_49f8_b0b3_9f3d301aa1f4.slice/crio-07f228c359c7bb11b3b9bc6abc1c56e23b592b1dfcf068e0411669d992b16ac7 WatchSource:0}: Error finding container 07f228c359c7bb11b3b9bc6abc1c56e23b592b1dfcf068e0411669d992b16ac7: Status 404 returned error can't find the container with id 07f228c359c7bb11b3b9bc6abc1c56e23b592b1dfcf068e0411669d992b16ac7 Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.256346 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" 
event={"ID":"6a1fe764-ed5b-4457-bc99-78fa9b816588","Type":"ContainerStarted","Data":"dde2799f456b53dfd449e79ed7d5fa36df6a4a98ba4062e384bb3ddc76f72d83"} Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.257891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4","Type":"ContainerStarted","Data":"07f228c359c7bb11b3b9bc6abc1c56e23b592b1dfcf068e0411669d992b16ac7"} Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.266204 4823 generic.go:334] "Generic (PLEG): container finished" podID="6b032822-a0f5-42d5-81d4-a3804a3714b9" containerID="a0519e53d11fef8dd88c0256196d7aa857ac06383a2c93c0449f94348ca4d86b" exitCode=0 Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.266603 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" event={"ID":"6b032822-a0f5-42d5-81d4-a3804a3714b9","Type":"ContainerDied","Data":"a0519e53d11fef8dd88c0256196d7aa857ac06383a2c93c0449f94348ca4d86b"} Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.266668 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" event={"ID":"6b032822-a0f5-42d5-81d4-a3804a3714b9","Type":"ContainerStarted","Data":"9414cca9996d2601eff5e89c8f8864aadb6a96173f802b8132ef05cec8e12737"} Jan 26 15:40:05 crc kubenswrapper[4823]: I0126 15:40:05.271234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83770651-fa4d-4cf4-b39c-0e09f0658a3f","Type":"ContainerStarted","Data":"86948ec6ed05bb806e8f71ba839220167b51dd8bd993f0dc84cffcdcaaa293ea"} Jan 26 15:40:06 crc kubenswrapper[4823]: I0126 15:40:06.317813 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" event={"ID":"6b032822-a0f5-42d5-81d4-a3804a3714b9","Type":"ContainerStarted","Data":"6f92e94336e2d9be21c3462e4192efb83a5df35b2f6515198b499a3eec2b53ae"} Jan 26 15:40:06 crc kubenswrapper[4823]: 
I0126 15:40:06.318923 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:06 crc kubenswrapper[4823]: I0126 15:40:06.333829 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83770651-fa4d-4cf4-b39c-0e09f0658a3f","Type":"ContainerStarted","Data":"edb973cf8247d86e16a1b340e1950dbe24d156e4bd3be33e0127003e4995d237"} Jan 26 15:40:06 crc kubenswrapper[4823]: I0126 15:40:06.357410 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" podStartSLOduration=3.357383986 podStartE2EDuration="3.357383986s" podCreationTimestamp="2026-01-26 15:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:06.343682495 +0000 UTC m=+3203.029145600" watchObservedRunningTime="2026-01-26 15:40:06.357383986 +0000 UTC m=+3203.042847091" Jan 26 15:40:06 crc kubenswrapper[4823]: I0126 15:40:06.358234 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4","Type":"ContainerStarted","Data":"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb"} Jan 26 15:40:06 crc kubenswrapper[4823]: I0126 15:40:06.470829 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.378501 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4","Type":"ContainerStarted","Data":"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf"} Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.378871 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" 
containerName="manila-api-log" containerID="cri-o://676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb" gracePeriod=30 Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.379094 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.379114 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api" containerID="cri-o://d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf" gracePeriod=30 Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.390677 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83770651-fa4d-4cf4-b39c-0e09f0658a3f","Type":"ContainerStarted","Data":"d465f257e25eaa57d623eed90d1daf25b5bdee1c10c38a8fa120b7821d6eaf3d"} Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.422564 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=4.42254377 podStartE2EDuration="4.42254377s" podCreationTimestamp="2026-01-26 15:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:07.406595927 +0000 UTC m=+3204.092059032" watchObservedRunningTime="2026-01-26 15:40:07.42254377 +0000 UTC m=+3204.108006875" Jan 26 15:40:07 crc kubenswrapper[4823]: I0126 15:40:07.442031 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.7871054429999997 podStartE2EDuration="4.442010408s" podCreationTimestamp="2026-01-26 15:40:03 +0000 UTC" firstStartedPulling="2026-01-26 15:40:04.474519162 +0000 UTC m=+3201.159982267" lastFinishedPulling="2026-01-26 15:40:05.129424117 +0000 UTC m=+3201.814887232" observedRunningTime="2026-01-26 
15:40:07.429301203 +0000 UTC m=+3204.114764308" watchObservedRunningTime="2026-01-26 15:40:07.442010408 +0000 UTC m=+3204.127473513" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.084034 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.186959 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data-custom\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187088 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-logs\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187284 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-scripts\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187403 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-852rm\" (UniqueName: \"kubernetes.io/projected/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-kube-api-access-852rm\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187469 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" 
(UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187536 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-etc-machine-id\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187571 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-combined-ca-bundle\") pod \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\" (UID: \"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4\") " Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187661 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-logs" (OuterVolumeSpecName: "logs") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.187656 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.188215 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.188237 4823 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-logs\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.209535 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.213557 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-kube-api-access-852rm" (OuterVolumeSpecName: "kube-api-access-852rm") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "kube-api-access-852rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.222999 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-scripts" (OuterVolumeSpecName: "scripts") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.256459 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data" (OuterVolumeSpecName: "config-data") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.287926 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" (UID: "27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.291747 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.291786 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.291803 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.291814 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 
crc kubenswrapper[4823]: I0126 15:40:08.291825 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-852rm\" (UniqueName: \"kubernetes.io/projected/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4-kube-api-access-852rm\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.405501 4823 generic.go:334] "Generic (PLEG): container finished" podID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerID="d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf" exitCode=143 Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.405782 4823 generic.go:334] "Generic (PLEG): container finished" podID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerID="676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb" exitCode=143 Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.405625 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.405638 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4","Type":"ContainerDied","Data":"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf"} Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.405943 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4","Type":"ContainerDied","Data":"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb"} Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.405968 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4","Type":"ContainerDied","Data":"07f228c359c7bb11b3b9bc6abc1c56e23b592b1dfcf068e0411669d992b16ac7"} Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.407088 4823 scope.go:117] "RemoveContainer" 
containerID="d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.442560 4823 scope.go:117] "RemoveContainer" containerID="676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.444608 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.458647 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.482877 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:08 crc kubenswrapper[4823]: E0126 15:40:08.483459 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api-log" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.483484 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api-log" Jan 26 15:40:08 crc kubenswrapper[4823]: E0126 15:40:08.483505 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.483514 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.483739 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api-log" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.483765 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" containerName="manila-api" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.485045 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.488190 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.488249 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.494807 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.523733 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.529660 4823 scope.go:117] "RemoveContainer" containerID="d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf" Jan 26 15:40:08 crc kubenswrapper[4823]: E0126 15:40:08.531129 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf\": container with ID starting with d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf not found: ID does not exist" containerID="d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.531170 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf"} err="failed to get container status \"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf\": rpc error: code = NotFound desc = could not find container \"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf\": container with ID starting with d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 
15:40:08.531195 4823 scope.go:117] "RemoveContainer" containerID="676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb" Jan 26 15:40:08 crc kubenswrapper[4823]: E0126 15:40:08.536513 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb\": container with ID starting with 676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb not found: ID does not exist" containerID="676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.536589 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb"} err="failed to get container status \"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb\": rpc error: code = NotFound desc = could not find container \"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb\": container with ID starting with 676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.536621 4823 scope.go:117] "RemoveContainer" containerID="d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.537580 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf"} err="failed to get container status \"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf\": rpc error: code = NotFound desc = could not find container \"d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf\": container with ID starting with d30229f9cde6f35f704a836a5a137c3561bbcf21cb6192ba977f01e344c16bbf not found: ID does not exist" Jan 26 15:40:08 crc 
kubenswrapper[4823]: I0126 15:40:08.537639 4823 scope.go:117] "RemoveContainer" containerID="676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.538181 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb"} err="failed to get container status \"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb\": rpc error: code = NotFound desc = could not find container \"676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb\": container with ID starting with 676d8976ac3c21cd1ac99564e8f322be5f5c9aec89cdce55c65840ff2f7d9feb not found: ID does not exist" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599038 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25f20855-79ca-439f-a558-66d82e32988f-logs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599119 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599305 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-scripts\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599381 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25f20855-79ca-439f-a558-66d82e32988f-etc-machine-id\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599544 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-config-data\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599686 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599823 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-config-data-custom\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.599873 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8gbj\" (UniqueName: \"kubernetes.io/projected/25f20855-79ca-439f-a558-66d82e32988f-kube-api-access-b8gbj\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.600039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-public-tls-certs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702558 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-config-data-custom\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702605 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8gbj\" (UniqueName: \"kubernetes.io/projected/25f20855-79ca-439f-a558-66d82e32988f-kube-api-access-b8gbj\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702685 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-public-tls-certs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702733 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25f20855-79ca-439f-a558-66d82e32988f-logs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc 
kubenswrapper[4823]: I0126 15:40:08.702779 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702835 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-scripts\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702867 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25f20855-79ca-439f-a558-66d82e32988f-etc-machine-id\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702898 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-config-data\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.702995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/25f20855-79ca-439f-a558-66d82e32988f-etc-machine-id\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.703281 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25f20855-79ca-439f-a558-66d82e32988f-logs\") pod \"manila-api-0\" (UID: 
\"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.706814 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-scripts\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.707550 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.709951 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-internal-tls-certs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.711168 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-config-data\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.714012 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-config-data-custom\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.723216 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/25f20855-79ca-439f-a558-66d82e32988f-public-tls-certs\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.756712 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8gbj\" (UniqueName: \"kubernetes.io/projected/25f20855-79ca-439f-a558-66d82e32988f-kube-api-access-b8gbj\") pod \"manila-api-0\" (UID: \"25f20855-79ca-439f-a558-66d82e32988f\") " pod="openstack/manila-api-0" Jan 26 15:40:08 crc kubenswrapper[4823]: I0126 15:40:08.831253 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Jan 26 15:40:09 crc kubenswrapper[4823]: I0126 15:40:09.486032 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Jan 26 15:40:09 crc kubenswrapper[4823]: W0126 15:40:09.495732 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25f20855_79ca_439f_a558_66d82e32988f.slice/crio-79499d9ab34fdc1955d40854db5d00ac8b038ecb811a88e368bccf0e58e55a9a WatchSource:0}: Error finding container 79499d9ab34fdc1955d40854db5d00ac8b038ecb811a88e368bccf0e58e55a9a: Status 404 returned error can't find the container with id 79499d9ab34fdc1955d40854db5d00ac8b038ecb811a88e368bccf0e58e55a9a Jan 26 15:40:09 crc kubenswrapper[4823]: I0126 15:40:09.560577 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:40:09 crc kubenswrapper[4823]: E0126 15:40:09.560838 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:40:09 crc kubenswrapper[4823]: I0126 15:40:09.573895 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4" path="/var/lib/kubelet/pods/27b9e21d-79f4-49f8-b0b3-9f3d301aa1f4/volumes" Jan 26 15:40:10 crc kubenswrapper[4823]: I0126 15:40:10.433652 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"25f20855-79ca-439f-a558-66d82e32988f","Type":"ContainerStarted","Data":"b53847b408663cce414c600cf899fd5a6a2314fdb78b603cc404c1242ea6da1b"} Jan 26 15:40:10 crc kubenswrapper[4823]: I0126 15:40:10.433960 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"25f20855-79ca-439f-a558-66d82e32988f","Type":"ContainerStarted","Data":"79499d9ab34fdc1955d40854db5d00ac8b038ecb811a88e368bccf0e58e55a9a"} Jan 26 15:40:11 crc kubenswrapper[4823]: I0126 15:40:11.447126 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"25f20855-79ca-439f-a558-66d82e32988f","Type":"ContainerStarted","Data":"3588827c4ded99de0fe8512ca67e225b731dc8e8e1341756a3f40d2eee9a6edc"} Jan 26 15:40:11 crc kubenswrapper[4823]: I0126 15:40:11.447444 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Jan 26 15:40:11 crc kubenswrapper[4823]: I0126 15:40:11.479391 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=3.479352954 podStartE2EDuration="3.479352954s" podCreationTimestamp="2026-01-26 15:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:11.463322138 +0000 UTC m=+3208.148785263" watchObservedRunningTime="2026-01-26 15:40:11.479352954 +0000 UTC m=+3208.164816069" Jan 26 15:40:13 crc 
kubenswrapper[4823]: I0126 15:40:13.885852 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 26 15:40:14 crc kubenswrapper[4823]: I0126 15:40:14.043030 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69655fd4bf-5pz9v" Jan 26 15:40:14 crc kubenswrapper[4823]: I0126 15:40:14.105392 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zkxk8"] Jan 26 15:40:14 crc kubenswrapper[4823]: I0126 15:40:14.105961 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerName="dnsmasq-dns" containerID="cri-o://49d280773794424c372a89d2ec9985e3ec5154a1d1096fb9fe1af5d65f97c189" gracePeriod=10 Jan 26 15:40:14 crc kubenswrapper[4823]: I0126 15:40:14.474989 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6a1fe764-ed5b-4457-bc99-78fa9b816588","Type":"ContainerStarted","Data":"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8"} Jan 26 15:40:14 crc kubenswrapper[4823]: I0126 15:40:14.481068 4823 generic.go:334] "Generic (PLEG): container finished" podID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerID="49d280773794424c372a89d2ec9985e3ec5154a1d1096fb9fe1af5d65f97c189" exitCode=0 Jan 26 15:40:14 crc kubenswrapper[4823]: I0126 15:40:14.481123 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" event={"ID":"974996f7-bcd0-44da-8861-8d44792fe2b1","Type":"ContainerDied","Data":"49d280773794424c372a89d2ec9985e3ec5154a1d1096fb9fe1af5d65f97c189"} Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.178469 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.265868 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-sb\") pod \"974996f7-bcd0-44da-8861-8d44792fe2b1\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.266031 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-nb\") pod \"974996f7-bcd0-44da-8861-8d44792fe2b1\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.266115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5sqv\" (UniqueName: \"kubernetes.io/projected/974996f7-bcd0-44da-8861-8d44792fe2b1-kube-api-access-z5sqv\") pod \"974996f7-bcd0-44da-8861-8d44792fe2b1\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.266588 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-config\") pod \"974996f7-bcd0-44da-8861-8d44792fe2b1\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.266625 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-openstack-edpm-ipam\") pod \"974996f7-bcd0-44da-8861-8d44792fe2b1\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.266752 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-dns-svc\") pod \"974996f7-bcd0-44da-8861-8d44792fe2b1\" (UID: \"974996f7-bcd0-44da-8861-8d44792fe2b1\") " Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.272587 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/974996f7-bcd0-44da-8861-8d44792fe2b1-kube-api-access-z5sqv" (OuterVolumeSpecName: "kube-api-access-z5sqv") pod "974996f7-bcd0-44da-8861-8d44792fe2b1" (UID: "974996f7-bcd0-44da-8861-8d44792fe2b1"). InnerVolumeSpecName "kube-api-access-z5sqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.357343 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "974996f7-bcd0-44da-8861-8d44792fe2b1" (UID: "974996f7-bcd0-44da-8861-8d44792fe2b1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.361393 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-config" (OuterVolumeSpecName: "config") pod "974996f7-bcd0-44da-8861-8d44792fe2b1" (UID: "974996f7-bcd0-44da-8861-8d44792fe2b1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.372197 4823 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-config\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.372236 4823 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.372246 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5sqv\" (UniqueName: \"kubernetes.io/projected/974996f7-bcd0-44da-8861-8d44792fe2b1-kube-api-access-z5sqv\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.374819 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "974996f7-bcd0-44da-8861-8d44792fe2b1" (UID: "974996f7-bcd0-44da-8861-8d44792fe2b1"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.379895 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "974996f7-bcd0-44da-8861-8d44792fe2b1" (UID: "974996f7-bcd0-44da-8861-8d44792fe2b1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.425883 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "974996f7-bcd0-44da-8861-8d44792fe2b1" (UID: "974996f7-bcd0-44da-8861-8d44792fe2b1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.474673 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.474721 4823 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.474734 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/974996f7-bcd0-44da-8861-8d44792fe2b1-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.490988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" event={"ID":"974996f7-bcd0-44da-8861-8d44792fe2b1","Type":"ContainerDied","Data":"7b1e0dbe92208bdf8cb0c3e3d4f97897fab86a372ee303bdf2f49ac464a07f3c"} Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.491048 4823 scope.go:117] "RemoveContainer" containerID="49d280773794424c372a89d2ec9985e3ec5154a1d1096fb9fe1af5d65f97c189" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.491043 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fbc59fbb7-zkxk8" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.493660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6a1fe764-ed5b-4457-bc99-78fa9b816588","Type":"ContainerStarted","Data":"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180"} Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.511455 4823 scope.go:117] "RemoveContainer" containerID="654c35ebc025d54b2553acab4c92a8dd74bb0e9bb456add9e16843e352646b7e" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.531077 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.355653289 podStartE2EDuration="12.531055659s" podCreationTimestamp="2026-01-26 15:40:03 +0000 UTC" firstStartedPulling="2026-01-26 15:40:04.686544573 +0000 UTC m=+3201.372007678" lastFinishedPulling="2026-01-26 15:40:13.861946943 +0000 UTC m=+3210.547410048" observedRunningTime="2026-01-26 15:40:15.521907071 +0000 UTC m=+3212.207370186" watchObservedRunningTime="2026-01-26 15:40:15.531055659 +0000 UTC m=+3212.216518774" Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.579660 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zkxk8"] Jan 26 15:40:15 crc kubenswrapper[4823]: I0126 15:40:15.594323 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fbc59fbb7-zkxk8"] Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.006661 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.006991 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-central-agent" containerID="cri-o://7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552" 
gracePeriod=30 Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.007067 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="proxy-httpd" containerID="cri-o://01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9" gracePeriod=30 Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.007122 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-notification-agent" containerID="cri-o://ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a" gracePeriod=30 Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.007133 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="sg-core" containerID="cri-o://4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665" gracePeriod=30 Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.518709 4823 generic.go:334] "Generic (PLEG): container finished" podID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerID="01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9" exitCode=0 Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.519048 4823 generic.go:334] "Generic (PLEG): container finished" podID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerID="4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665" exitCode=2 Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.518788 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerDied","Data":"01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9"} Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.519112 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerDied","Data":"4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665"} Jan 26 15:40:17 crc kubenswrapper[4823]: I0126 15:40:17.572210 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" path="/var/lib/kubelet/pods/974996f7-bcd0-44da-8861-8d44792fe2b1/volumes" Jan 26 15:40:18 crc kubenswrapper[4823]: I0126 15:40:18.529766 4823 generic.go:334] "Generic (PLEG): container finished" podID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerID="7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552" exitCode=0 Jan 26 15:40:18 crc kubenswrapper[4823]: I0126 15:40:18.529834 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerDied","Data":"7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552"} Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.335964 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481112 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-scripts\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481207 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-log-httpd\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481324 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwspt\" (UniqueName: \"kubernetes.io/projected/6c49ef12-8848-4054-af2f-18d5f98522c8-kube-api-access-gwspt\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481400 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-sg-core-conf-yaml\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481428 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-combined-ca-bundle\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481449 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-config-data\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481584 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-ceilometer-tls-certs\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.481624 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-run-httpd\") pod \"6c49ef12-8848-4054-af2f-18d5f98522c8\" (UID: \"6c49ef12-8848-4054-af2f-18d5f98522c8\") " Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.482463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.482760 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.490688 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-scripts" (OuterVolumeSpecName: "scripts") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.490969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c49ef12-8848-4054-af2f-18d5f98522c8-kube-api-access-gwspt" (OuterVolumeSpecName: "kube-api-access-gwspt") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "kube-api-access-gwspt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.518131 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.559898 4823 generic.go:334] "Generic (PLEG): container finished" podID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerID="ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a" exitCode=0 Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.559948 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerDied","Data":"ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a"} Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.559983 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6c49ef12-8848-4054-af2f-18d5f98522c8","Type":"ContainerDied","Data":"cc6f3af804fa08666baf3a61ebfcba13097009f2c60fc380c4ccdbc5a779a914"} Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.560009 4823 scope.go:117] "RemoveContainer" containerID="01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.559978 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.577829 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.584586 4823 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.584750 4823 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.584857 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.584975 4823 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6c49ef12-8848-4054-af2f-18d5f98522c8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.585086 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwspt\" (UniqueName: \"kubernetes.io/projected/6c49ef12-8848-4054-af2f-18d5f98522c8-kube-api-access-gwspt\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.585195 4823 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.595641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-config-data" (OuterVolumeSpecName: "config-data") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.596078 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c49ef12-8848-4054-af2f-18d5f98522c8" (UID: "6c49ef12-8848-4054-af2f-18d5f98522c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.634215 4823 scope.go:117] "RemoveContainer" containerID="4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.658237 4823 scope.go:117] "RemoveContainer" containerID="ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.686624 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.686664 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c49ef12-8848-4054-af2f-18d5f98522c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.688211 4823 scope.go:117] "RemoveContainer" containerID="7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.710544 4823 scope.go:117] "RemoveContainer" containerID="01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.710924 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9\": container with ID starting with 01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9 not found: ID does not exist" containerID="01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.710978 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9"} err="failed to get container status \"01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9\": rpc error: code = NotFound desc = could not find container \"01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9\": container with ID starting with 01d54c18884d1dcb9c5ab1f8fd06f3b7a949626f15a3e11ca79ec1653da57fb9 not found: ID does not exist" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.711012 4823 scope.go:117] "RemoveContainer" containerID="4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.711942 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665\": container with ID starting with 4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665 not found: ID does not exist" containerID="4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.712014 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665"} err="failed to get container status \"4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665\": rpc error: code = NotFound desc = could not find container \"4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665\": container with ID 
starting with 4c43d6b1b4e1fa3b23b27b4f98f43e32b09cea78fb572cbbfaa21dc3b3a60665 not found: ID does not exist" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.712051 4823 scope.go:117] "RemoveContainer" containerID="ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.712480 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a\": container with ID starting with ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a not found: ID does not exist" containerID="ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.712514 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a"} err="failed to get container status \"ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a\": rpc error: code = NotFound desc = could not find container \"ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a\": container with ID starting with ff938c179e0cff59f9d6650127f3b667b3771c2d7a1ced6737592f5763baac4a not found: ID does not exist" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.712537 4823 scope.go:117] "RemoveContainer" containerID="7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.712804 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552\": container with ID starting with 7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552 not found: ID does not exist" containerID="7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552" Jan 26 
15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.712830 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552"} err="failed to get container status \"7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552\": rpc error: code = NotFound desc = could not find container \"7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552\": container with ID starting with 7542362b95327ac9716bf6151c8e0babb6b92f8fde50559d4e0a64836b24f552 not found: ID does not exist" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.895076 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.907896 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.923479 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.923985 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="proxy-httpd" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924010 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="proxy-httpd" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.924038 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="sg-core" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924046 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="sg-core" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.924074 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerName="dnsmasq-dns" 
Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924082 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerName="dnsmasq-dns" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.924101 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-notification-agent" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924110 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-notification-agent" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.924124 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-central-agent" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924131 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-central-agent" Jan 26 15:40:20 crc kubenswrapper[4823]: E0126 15:40:20.924143 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerName="init" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924150 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" containerName="init" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924337 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-notification-agent" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924357 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="sg-core" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924386 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="974996f7-bcd0-44da-8861-8d44792fe2b1" 
containerName="dnsmasq-dns" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924401 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="ceilometer-central-agent" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.924412 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" containerName="proxy-httpd" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.926439 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.929517 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.929717 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.931020 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.946820 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.993834 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-config-data\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.993898 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " 
pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.993939 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.993974 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953ca111-757e-44e8-9f00-1b4576cb4b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.993992 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.994029 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplgd\" (UniqueName: \"kubernetes.io/projected/953ca111-757e-44e8-9f00-1b4576cb4b3c-kube-api-access-hplgd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.994058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953ca111-757e-44e8-9f00-1b4576cb4b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:20 crc kubenswrapper[4823]: I0126 15:40:20.994087 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-scripts\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.095522 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-config-data\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.095833 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.095974 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.096125 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953ca111-757e-44e8-9f00-1b4576cb4b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.096248 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-ceilometer-tls-certs\") pod \"ceilometer-0\" 
(UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.096396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplgd\" (UniqueName: \"kubernetes.io/projected/953ca111-757e-44e8-9f00-1b4576cb4b3c-kube-api-access-hplgd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.096549 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953ca111-757e-44e8-9f00-1b4576cb4b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.096626 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953ca111-757e-44e8-9f00-1b4576cb4b3c-log-httpd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.096752 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-scripts\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.097031 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/953ca111-757e-44e8-9f00-1b4576cb4b3c-run-httpd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.100385 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.102926 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.103428 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-config-data\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.103709 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-scripts\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.104694 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/953ca111-757e-44e8-9f00-1b4576cb4b3c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.116911 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplgd\" (UniqueName: \"kubernetes.io/projected/953ca111-757e-44e8-9f00-1b4576cb4b3c-kube-api-access-hplgd\") pod \"ceilometer-0\" (UID: \"953ca111-757e-44e8-9f00-1b4576cb4b3c\") " pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.243146 4823 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.572413 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c49ef12-8848-4054-af2f-18d5f98522c8" path="/var/lib/kubelet/pods/6c49ef12-8848-4054-af2f-18d5f98522c8/volumes" Jan 26 15:40:21 crc kubenswrapper[4823]: W0126 15:40:21.786659 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod953ca111_757e_44e8_9f00_1b4576cb4b3c.slice/crio-c7e779cdb781140823bdc9796a7cb4942031197e299acdfb973d200e060d92bc WatchSource:0}: Error finding container c7e779cdb781140823bdc9796a7cb4942031197e299acdfb973d200e060d92bc: Status 404 returned error can't find the container with id c7e779cdb781140823bdc9796a7cb4942031197e299acdfb973d200e060d92bc Jan 26 15:40:21 crc kubenswrapper[4823]: I0126 15:40:21.788170 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 26 15:40:22 crc kubenswrapper[4823]: I0126 15:40:22.560425 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:40:22 crc kubenswrapper[4823]: E0126 15:40:22.560999 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:40:22 crc kubenswrapper[4823]: I0126 15:40:22.583746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerStarted","Data":"c7e779cdb781140823bdc9796a7cb4942031197e299acdfb973d200e060d92bc"} Jan 26 15:40:23 crc kubenswrapper[4823]: I0126 15:40:23.601917 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerStarted","Data":"90618cf9746e520f6822fde308dbafda564a0fb2e18990eda1e6c7d304d99026"} Jan 26 15:40:23 crc kubenswrapper[4823]: I0126 15:40:23.976730 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 26 15:40:24 crc kubenswrapper[4823]: I0126 15:40:24.623339 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerStarted","Data":"aa43247e4f4c8f588109dbb3bf41e13f808f9ffbbda33b6a3eaf4362a18ebe50"} Jan 26 15:40:25 crc kubenswrapper[4823]: I0126 15:40:25.635174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerStarted","Data":"2ba7392adf1687276c4cc4e5df77461956017841608a29926ae10e2a075289b3"} Jan 26 15:40:25 crc kubenswrapper[4823]: I0126 15:40:25.835112 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 26 15:40:25 crc kubenswrapper[4823]: I0126 15:40:25.897413 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:26 crc kubenswrapper[4823]: I0126 15:40:26.646295 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="manila-scheduler" containerID="cri-o://edb973cf8247d86e16a1b340e1950dbe24d156e4bd3be33e0127003e4995d237" gracePeriod=30 Jan 26 15:40:26 crc kubenswrapper[4823]: I0126 15:40:26.646783 4823 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="probe" containerID="cri-o://d465f257e25eaa57d623eed90d1daf25b5bdee1c10c38a8fa120b7821d6eaf3d" gracePeriod=30 Jan 26 15:40:27 crc kubenswrapper[4823]: I0126 15:40:27.656812 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerStarted","Data":"5ba5d613e23b2a5fda0fab496cb7371928a1c81f01d63acaf3c85fee639d8e52"} Jan 26 15:40:27 crc kubenswrapper[4823]: I0126 15:40:27.657345 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 26 15:40:27 crc kubenswrapper[4823]: I0126 15:40:27.659425 4823 generic.go:334] "Generic (PLEG): container finished" podID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerID="d465f257e25eaa57d623eed90d1daf25b5bdee1c10c38a8fa120b7821d6eaf3d" exitCode=0 Jan 26 15:40:27 crc kubenswrapper[4823]: I0126 15:40:27.659456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83770651-fa4d-4cf4-b39c-0e09f0658a3f","Type":"ContainerDied","Data":"d465f257e25eaa57d623eed90d1daf25b5bdee1c10c38a8fa120b7821d6eaf3d"} Jan 26 15:40:27 crc kubenswrapper[4823]: I0126 15:40:27.695737 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.89500426 podStartE2EDuration="7.695715053s" podCreationTimestamp="2026-01-26 15:40:20 +0000 UTC" firstStartedPulling="2026-01-26 15:40:21.789598126 +0000 UTC m=+3218.475061231" lastFinishedPulling="2026-01-26 15:40:26.590308919 +0000 UTC m=+3223.275772024" observedRunningTime="2026-01-26 15:40:27.692080775 +0000 UTC m=+3224.377543920" watchObservedRunningTime="2026-01-26 15:40:27.695715053 +0000 UTC m=+3224.381178178" Jan 26 15:40:30 crc kubenswrapper[4823]: I0126 15:40:30.208218 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Jan 26 15:40:31 crc kubenswrapper[4823]: I0126 15:40:31.708715 4823 generic.go:334] "Generic (PLEG): container finished" podID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerID="edb973cf8247d86e16a1b340e1950dbe24d156e4bd3be33e0127003e4995d237" exitCode=0 Jan 26 15:40:31 crc kubenswrapper[4823]: I0126 15:40:31.708806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83770651-fa4d-4cf4-b39c-0e09f0658a3f","Type":"ContainerDied","Data":"edb973cf8247d86e16a1b340e1950dbe24d156e4bd3be33e0127003e4995d237"} Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.181417 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307643 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-combined-ca-bundle\") pod \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data\") pod \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307765 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4bln\" (UniqueName: \"kubernetes.io/projected/83770651-fa4d-4cf4-b39c-0e09f0658a3f-kube-api-access-t4bln\") pod \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307819 4823 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data-custom\") pod \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307883 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83770651-fa4d-4cf4-b39c-0e09f0658a3f-etc-machine-id\") pod \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307925 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-scripts\") pod \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\" (UID: \"83770651-fa4d-4cf4-b39c-0e09f0658a3f\") " Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.307970 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83770651-fa4d-4cf4-b39c-0e09f0658a3f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "83770651-fa4d-4cf4-b39c-0e09f0658a3f" (UID: "83770651-fa4d-4cf4-b39c-0e09f0658a3f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.308426 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/83770651-fa4d-4cf4-b39c-0e09f0658a3f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.312931 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-scripts" (OuterVolumeSpecName: "scripts") pod "83770651-fa4d-4cf4-b39c-0e09f0658a3f" (UID: "83770651-fa4d-4cf4-b39c-0e09f0658a3f"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.313380 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83770651-fa4d-4cf4-b39c-0e09f0658a3f-kube-api-access-t4bln" (OuterVolumeSpecName: "kube-api-access-t4bln") pod "83770651-fa4d-4cf4-b39c-0e09f0658a3f" (UID: "83770651-fa4d-4cf4-b39c-0e09f0658a3f"). InnerVolumeSpecName "kube-api-access-t4bln". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.320599 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "83770651-fa4d-4cf4-b39c-0e09f0658a3f" (UID: "83770651-fa4d-4cf4-b39c-0e09f0658a3f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.382593 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83770651-fa4d-4cf4-b39c-0e09f0658a3f" (UID: "83770651-fa4d-4cf4-b39c-0e09f0658a3f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.410551 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.410586 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4bln\" (UniqueName: \"kubernetes.io/projected/83770651-fa4d-4cf4-b39c-0e09f0658a3f-kube-api-access-t4bln\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.410597 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.410607 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.412074 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data" (OuterVolumeSpecName: "config-data") pod "83770651-fa4d-4cf4-b39c-0e09f0658a3f" (UID: "83770651-fa4d-4cf4-b39c-0e09f0658a3f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.512313 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83770651-fa4d-4cf4-b39c-0e09f0658a3f-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.721620 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"83770651-fa4d-4cf4-b39c-0e09f0658a3f","Type":"ContainerDied","Data":"86948ec6ed05bb806e8f71ba839220167b51dd8bd993f0dc84cffcdcaaa293ea"} Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.722866 4823 scope.go:117] "RemoveContainer" containerID="d465f257e25eaa57d623eed90d1daf25b5bdee1c10c38a8fa120b7821d6eaf3d" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.721680 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.757265 4823 scope.go:117] "RemoveContainer" containerID="edb973cf8247d86e16a1b340e1950dbe24d156e4bd3be33e0127003e4995d237" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.774775 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.818012 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.830353 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:32 crc kubenswrapper[4823]: E0126 15:40:32.830862 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="manila-scheduler" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.830891 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" 
containerName="manila-scheduler" Jan 26 15:40:32 crc kubenswrapper[4823]: E0126 15:40:32.830925 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="probe" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.830932 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="probe" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.831273 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="manila-scheduler" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.831286 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" containerName="probe" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.832276 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.835357 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.850569 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.924144 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.924203 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3022a53-0ff5-4e22-9229-9747a29daac9-etc-machine-id\") pod 
\"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.924288 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-config-data\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.924671 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-scripts\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.924996 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdh4\" (UniqueName: \"kubernetes.io/projected/a3022a53-0ff5-4e22-9229-9747a29daac9-kube-api-access-zzdh4\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:32 crc kubenswrapper[4823]: I0126 15:40:32.925058 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.026663 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-scripts\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " 
pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.026766 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdh4\" (UniqueName: \"kubernetes.io/projected/a3022a53-0ff5-4e22-9229-9747a29daac9-kube-api-access-zzdh4\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.026795 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.026827 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.026843 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3022a53-0ff5-4e22-9229-9747a29daac9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.026899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-config-data\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.027460 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a3022a53-0ff5-4e22-9229-9747a29daac9-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.031769 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-scripts\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.032254 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.039553 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-config-data\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.041027 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3022a53-0ff5-4e22-9229-9747a29daac9-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.053237 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdh4\" (UniqueName: \"kubernetes.io/projected/a3022a53-0ff5-4e22-9229-9747a29daac9-kube-api-access-zzdh4\") pod \"manila-scheduler-0\" (UID: 
\"a3022a53-0ff5-4e22-9229-9747a29daac9\") " pod="openstack/manila-scheduler-0" Jan 26 15:40:33 crc kubenswrapper[4823]: I0126 15:40:33.153952 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 15:40:33.579756 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83770651-fa4d-4cf4-b39c-0e09f0658a3f" path="/var/lib/kubelet/pods/83770651-fa4d-4cf4-b39c-0e09f0658a3f/volumes" Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 15:40:33.580671 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:40:34 crc kubenswrapper[4823]: E0126 15:40:33.580898 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 15:40:33.587492 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 15:40:33.743046 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a3022a53-0ff5-4e22-9229-9747a29daac9","Type":"ContainerStarted","Data":"19109a025a543f23cddd837bc792789bc331872c1b29de56e717d1cba11ad76d"} Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 15:40:34.755936 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a3022a53-0ff5-4e22-9229-9747a29daac9","Type":"ContainerStarted","Data":"e95f7bcdeb3684d1cc02143da3a9cd24229090df43e7cf2cf5aa6bd16e868cf7"} Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 
15:40:34.756475 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"a3022a53-0ff5-4e22-9229-9747a29daac9","Type":"ContainerStarted","Data":"15323d37adcd8f53177c6d9ab3c0082b3329b9bb3b93737d1244c6fb1eab9d32"} Jan 26 15:40:34 crc kubenswrapper[4823]: I0126 15:40:34.781254 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.781238794 podStartE2EDuration="2.781238794s" podCreationTimestamp="2026-01-26 15:40:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:34.776465234 +0000 UTC m=+3231.461928359" watchObservedRunningTime="2026-01-26 15:40:34.781238794 +0000 UTC m=+3231.466701899" Jan 26 15:40:35 crc kubenswrapper[4823]: I0126 15:40:35.673481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 26 15:40:35 crc kubenswrapper[4823]: I0126 15:40:35.742214 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:35 crc kubenswrapper[4823]: I0126 15:40:35.763622 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="manila-share" containerID="cri-o://0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8" gracePeriod=30 Jan 26 15:40:35 crc kubenswrapper[4823]: I0126 15:40:35.763827 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="probe" containerID="cri-o://c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180" gracePeriod=30 Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.773447 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.775383 4823 generic.go:334] "Generic (PLEG): container finished" podID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerID="c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180" exitCode=0 Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.775418 4823 generic.go:334] "Generic (PLEG): container finished" podID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerID="0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8" exitCode=1 Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.775470 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6a1fe764-ed5b-4457-bc99-78fa9b816588","Type":"ContainerDied","Data":"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180"} Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.775502 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6a1fe764-ed5b-4457-bc99-78fa9b816588","Type":"ContainerDied","Data":"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8"} Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.775517 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"6a1fe764-ed5b-4457-bc99-78fa9b816588","Type":"ContainerDied","Data":"dde2799f456b53dfd449e79ed7d5fa36df6a4a98ba4062e384bb3ddc76f72d83"} Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.775536 4823 scope.go:117] "RemoveContainer" containerID="c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.809119 4823 scope.go:117] "RemoveContainer" containerID="0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821390 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-var-lib-manila\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821466 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-scripts\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821533 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-combined-ca-bundle\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821544 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "var-lib-manila". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821624 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nhwb\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-kube-api-access-6nhwb\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821690 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data-custom\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821772 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-etc-machine-id\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821845 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-ceph\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.821869 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data\") pod \"6a1fe764-ed5b-4457-bc99-78fa9b816588\" (UID: \"6a1fe764-ed5b-4457-bc99-78fa9b816588\") " Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.822521 4823 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-var-lib-manila\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.822694 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.829048 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.829227 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-scripts" (OuterVolumeSpecName: "scripts") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.830268 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-kube-api-access-6nhwb" (OuterVolumeSpecName: "kube-api-access-6nhwb") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "kube-api-access-6nhwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.831150 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-ceph" (OuterVolumeSpecName: "ceph") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.836681 4823 scope.go:117] "RemoveContainer" containerID="c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180" Jan 26 15:40:36 crc kubenswrapper[4823]: E0126 15:40:36.837933 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180\": container with ID starting with c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180 not found: ID does not exist" containerID="c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.838179 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180"} err="failed to get container status \"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180\": rpc error: code = NotFound desc = could not find container \"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180\": container with ID starting with c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180 not found: ID does not exist" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.838296 4823 scope.go:117] "RemoveContainer" containerID="0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8" Jan 26 15:40:36 crc kubenswrapper[4823]: E0126 15:40:36.838961 4823 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8\": container with ID starting with 0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8 not found: ID does not exist" containerID="0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.839085 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8"} err="failed to get container status \"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8\": rpc error: code = NotFound desc = could not find container \"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8\": container with ID starting with 0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8 not found: ID does not exist" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.839194 4823 scope.go:117] "RemoveContainer" containerID="c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.845024 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180"} err="failed to get container status \"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180\": rpc error: code = NotFound desc = could not find container \"c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180\": container with ID starting with c820790d91a6d3df73e9a321a9d7e7914994369442623cf0fba78d3d38324180 not found: ID does not exist" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.845086 4823 scope.go:117] "RemoveContainer" containerID="0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.845626 4823 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8"} err="failed to get container status \"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8\": rpc error: code = NotFound desc = could not find container \"0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8\": container with ID starting with 0df77535486fa9abaade6cfb1c221ecc083fada1cc17382bb6ffce7f55c67cd8 not found: ID does not exist" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.932926 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6nhwb\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-kube-api-access-6nhwb\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.932981 4823 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.932993 4823 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a1fe764-ed5b-4457-bc99-78fa9b816588-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.933005 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/6a1fe764-ed5b-4457-bc99-78fa9b816588-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.933023 4823 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-scripts\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.976594 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data" (OuterVolumeSpecName: "config-data") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:36 crc kubenswrapper[4823]: I0126 15:40:36.993993 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a1fe764-ed5b-4457-bc99-78fa9b816588" (UID: "6a1fe764-ed5b-4457-bc99-78fa9b816588"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.034870 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.035186 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a1fe764-ed5b-4457-bc99-78fa9b816588-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.786553 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.817789 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.827628 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.856567 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:37 crc kubenswrapper[4823]: E0126 15:40:37.857061 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="manila-share" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.857092 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="manila-share" Jan 26 15:40:37 crc kubenswrapper[4823]: E0126 15:40:37.857108 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="probe" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.857117 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="probe" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.857378 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="probe" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.857397 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" containerName="manila-share" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.858697 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.860943 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.867924 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.956671 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.956728 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/27067c33-cf62-4e6d-9f91-7c1867d0b195-ceph\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.957134 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27067c33-cf62-4e6d-9f91-7c1867d0b195-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.957228 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-scripts\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 
15:40:37.957510 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/27067c33-cf62-4e6d-9f91-7c1867d0b195-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.957557 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.957593 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-config-data\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:37 crc kubenswrapper[4823]: I0126 15:40:37.957660 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csfxl\" (UniqueName: \"kubernetes.io/projected/27067c33-cf62-4e6d-9f91-7c1867d0b195-kube-api-access-csfxl\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/27067c33-cf62-4e6d-9f91-7c1867d0b195-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059667 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059690 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-config-data\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059710 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csfxl\" (UniqueName: \"kubernetes.io/projected/27067c33-cf62-4e6d-9f91-7c1867d0b195-kube-api-access-csfxl\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059754 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059761 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/27067c33-cf62-4e6d-9f91-7c1867d0b195-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.059772 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/projected/27067c33-cf62-4e6d-9f91-7c1867d0b195-ceph\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.060133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27067c33-cf62-4e6d-9f91-7c1867d0b195-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.060181 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-scripts\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.060297 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/27067c33-cf62-4e6d-9f91-7c1867d0b195-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.066550 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/27067c33-cf62-4e6d-9f91-7c1867d0b195-ceph\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.066579 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " 
pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.066585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.068231 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-scripts\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.080833 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27067c33-cf62-4e6d-9f91-7c1867d0b195-config-data\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.082357 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csfxl\" (UniqueName: \"kubernetes.io/projected/27067c33-cf62-4e6d-9f91-7c1867d0b195-kube-api-access-csfxl\") pod \"manila-share-share1-0\" (UID: \"27067c33-cf62-4e6d-9f91-7c1867d0b195\") " pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.185831 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.777780 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Jan 26 15:40:38 crc kubenswrapper[4823]: I0126 15:40:38.799204 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"27067c33-cf62-4e6d-9f91-7c1867d0b195","Type":"ContainerStarted","Data":"c3f64939331179e63a589164cb9065e72276c7b3fcd0a9f67c3bb77c33309ecb"} Jan 26 15:40:39 crc kubenswrapper[4823]: I0126 15:40:39.578899 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a1fe764-ed5b-4457-bc99-78fa9b816588" path="/var/lib/kubelet/pods/6a1fe764-ed5b-4457-bc99-78fa9b816588/volumes" Jan 26 15:40:39 crc kubenswrapper[4823]: I0126 15:40:39.825212 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"27067c33-cf62-4e6d-9f91-7c1867d0b195","Type":"ContainerStarted","Data":"a1bcc8a8fb2bb54ea5dac6d9ae5d4d229cfcd9e168b5b56185f7a2c22b74a01c"} Jan 26 15:40:39 crc kubenswrapper[4823]: I0126 15:40:39.825572 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"27067c33-cf62-4e6d-9f91-7c1867d0b195","Type":"ContainerStarted","Data":"4fa7478ca332c0c8b92ac23c5f45d27aee2d7cfc12b4b5351da4a398c72c66fc"} Jan 26 15:40:39 crc kubenswrapper[4823]: I0126 15:40:39.851792 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=2.851771476 podStartE2EDuration="2.851771476s" podCreationTimestamp="2026-01-26 15:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 15:40:39.84197476 +0000 UTC m=+3236.527437865" watchObservedRunningTime="2026-01-26 15:40:39.851771476 +0000 UTC m=+3236.537234581" Jan 26 15:40:43 crc kubenswrapper[4823]: 
I0126 15:40:43.155594 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Jan 26 15:40:44 crc kubenswrapper[4823]: I0126 15:40:44.973423 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Jan 26 15:40:45 crc kubenswrapper[4823]: I0126 15:40:45.560538 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:40:45 crc kubenswrapper[4823]: E0126 15:40:45.561152 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:40:48 crc kubenswrapper[4823]: I0126 15:40:48.186399 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Jan 26 15:40:51 crc kubenswrapper[4823]: I0126 15:40:51.253841 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 26 15:40:56 crc kubenswrapper[4823]: I0126 15:40:56.560538 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:40:56 crc kubenswrapper[4823]: E0126 15:40:56.561323 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 
15:40:59 crc kubenswrapper[4823]: I0126 15:40:59.990041 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Jan 26 15:41:08 crc kubenswrapper[4823]: I0126 15:41:08.560654 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:41:08 crc kubenswrapper[4823]: E0126 15:41:08.561430 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:41:20 crc kubenswrapper[4823]: I0126 15:41:20.009924 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:41:20 crc kubenswrapper[4823]: E0126 15:41:20.011134 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:41:34 crc kubenswrapper[4823]: I0126 15:41:34.565621 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:41:34 crc kubenswrapper[4823]: E0126 15:41:34.567017 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:41:47 crc kubenswrapper[4823]: I0126 15:41:47.561868 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:41:47 crc kubenswrapper[4823]: E0126 15:41:47.563094 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.506853 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.509487 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.512653 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.512740 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hmpcx" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.516950 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.522177 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.535734 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.593903 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbwz\" (UniqueName: \"kubernetes.io/projected/1529ef7b-113d-479f-b4b7-d134a51539e3-kube-api-access-zhbwz\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.593944 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.593961 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.594000 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.594014 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.594153 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.595086 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.595131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.595196 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.595566 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697619 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697781 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhbwz\" (UniqueName: \"kubernetes.io/projected/1529ef7b-113d-479f-b4b7-d134a51539e3-kube-api-access-zhbwz\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 
26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697809 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697834 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697873 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697894 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697941 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697969 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.697995 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.698040 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.698207 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.698238 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: 
I0126 15:41:54.700808 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.702019 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.704097 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ceph\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.705341 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-config-data\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.714068 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ssh-key\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.714120 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config-secret\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.715251 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ca-certs\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.716649 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhbwz\" (UniqueName: \"kubernetes.io/projected/1529ef7b-113d-479f-b4b7-d134a51539e3-kube-api-access-zhbwz\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.736725 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s00-full\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:54 crc kubenswrapper[4823]: I0126 15:41:54.837973 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 26 15:41:55 crc kubenswrapper[4823]: I0126 15:41:55.378313 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-full"] Jan 26 15:41:55 crc kubenswrapper[4823]: I0126 15:41:55.497459 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"1529ef7b-113d-479f-b4b7-d134a51539e3","Type":"ContainerStarted","Data":"b2616dbd7453a5e3069c71c413b7dfc576781506c2fa5bf01a6fd56a9fe74294"} Jan 26 15:42:00 crc kubenswrapper[4823]: I0126 15:42:00.560469 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:42:00 crc kubenswrapper[4823]: E0126 15:42:00.561513 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:42:14 crc kubenswrapper[4823]: I0126 15:42:14.560769 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:42:14 crc kubenswrapper[4823]: E0126 15:42:14.565661 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.072614 4823 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-operators-fm2dd"] Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.075284 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.091085 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm2dd"] Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.138226 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-utilities\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.138493 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2lr7\" (UniqueName: \"kubernetes.io/projected/b608726d-f21a-4421-a3f8-494fb9ea4de5-kube-api-access-f2lr7\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.138649 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-catalog-content\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.239881 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-catalog-content\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " 
pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.239982 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-utilities\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.240063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2lr7\" (UniqueName: \"kubernetes.io/projected/b608726d-f21a-4421-a3f8-494fb9ea4de5-kube-api-access-f2lr7\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.240819 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-catalog-content\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.241027 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-utilities\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 crc kubenswrapper[4823]: I0126 15:42:20.265213 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2lr7\" (UniqueName: \"kubernetes.io/projected/b608726d-f21a-4421-a3f8-494fb9ea4de5-kube-api-access-f2lr7\") pod \"redhat-operators-fm2dd\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:20 
crc kubenswrapper[4823]: I0126 15:42:20.472081 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.737645 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mgnj2"] Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.741451 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.746684 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgnj2"] Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.780122 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-catalog-content\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.780202 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcwz2\" (UniqueName: \"kubernetes.io/projected/3e0da8dc-3c65-47e8-aebe-d2656be6fede-kube-api-access-dcwz2\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.780236 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-utilities\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 
15:42:26.881566 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcwz2\" (UniqueName: \"kubernetes.io/projected/3e0da8dc-3c65-47e8-aebe-d2656be6fede-kube-api-access-dcwz2\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.881617 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-utilities\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.881833 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-catalog-content\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.882327 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-catalog-content\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.882567 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-utilities\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:26 crc kubenswrapper[4823]: I0126 15:42:26.900343 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-dcwz2\" (UniqueName: \"kubernetes.io/projected/3e0da8dc-3c65-47e8-aebe-d2656be6fede-kube-api-access-dcwz2\") pod \"redhat-marketplace-mgnj2\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:27 crc kubenswrapper[4823]: I0126 15:42:27.072479 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:27 crc kubenswrapper[4823]: E0126 15:42:27.316874 4823 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 26 15:42:27 crc kubenswrapper[4823]: E0126 15:42:27.317463 4823 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},Volu
meMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ceph,ReadOnly:true,MountPath:/etc/ceph,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zhbwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy
:nil,} start failed in pod tempest-tests-tempest-s00-full_openstack(1529ef7b-113d-479f-b4b7-d134a51539e3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 26 15:42:27 crc kubenswrapper[4823]: E0126 15:42:27.322478 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="1529ef7b-113d-479f-b4b7-d134a51539e3" Jan 26 15:42:27 crc kubenswrapper[4823]: I0126 15:42:27.649221 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fm2dd"] Jan 26 15:42:27 crc kubenswrapper[4823]: I0126 15:42:27.728922 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgnj2"] Jan 26 15:42:27 crc kubenswrapper[4823]: W0126 15:42:27.730095 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e0da8dc_3c65_47e8_aebe_d2656be6fede.slice/crio-ce359540c7acc9309661e1b477b7375d3830499040bfdcef728db0ca2fece6f0 WatchSource:0}: Error finding container ce359540c7acc9309661e1b477b7375d3830499040bfdcef728db0ca2fece6f0: Status 404 returned error can't find the container with id ce359540c7acc9309661e1b477b7375d3830499040bfdcef728db0ca2fece6f0 Jan 26 15:42:27 crc kubenswrapper[4823]: I0126 15:42:27.888456 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerStarted","Data":"6ec31f4b96d6478a9b8cc1bf97ea6c9f11563fecab4db2e52c3901e5545b2240"} Jan 26 15:42:27 crc kubenswrapper[4823]: I0126 15:42:27.895927 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" 
event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerStarted","Data":"ce359540c7acc9309661e1b477b7375d3830499040bfdcef728db0ca2fece6f0"} Jan 26 15:42:27 crc kubenswrapper[4823]: E0126 15:42:27.897447 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest-s00-full" podUID="1529ef7b-113d-479f-b4b7-d134a51539e3" Jan 26 15:42:28 crc kubenswrapper[4823]: I0126 15:42:28.561202 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:42:28 crc kubenswrapper[4823]: E0126 15:42:28.561960 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:42:28 crc kubenswrapper[4823]: I0126 15:42:28.906539 4823 generic.go:334] "Generic (PLEG): container finished" podID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerID="c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144" exitCode=0 Jan 26 15:42:28 crc kubenswrapper[4823]: I0126 15:42:28.906610 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerDied","Data":"c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144"} Jan 26 15:42:28 crc kubenswrapper[4823]: I0126 15:42:28.912660 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" 
containerID="d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a" exitCode=0 Jan 26 15:42:28 crc kubenswrapper[4823]: I0126 15:42:28.912765 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerDied","Data":"d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a"} Jan 26 15:42:29 crc kubenswrapper[4823]: I0126 15:42:29.924745 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerStarted","Data":"dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d"} Jan 26 15:42:29 crc kubenswrapper[4823]: I0126 15:42:29.927601 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerStarted","Data":"cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7"} Jan 26 15:42:30 crc kubenswrapper[4823]: I0126 15:42:30.939432 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerID="cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7" exitCode=0 Jan 26 15:42:30 crc kubenswrapper[4823]: I0126 15:42:30.939477 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerDied","Data":"cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7"} Jan 26 15:42:33 crc kubenswrapper[4823]: I0126 15:42:33.970990 4823 generic.go:334] "Generic (PLEG): container finished" podID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerID="dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d" exitCode=0 Jan 26 15:42:33 crc kubenswrapper[4823]: I0126 15:42:33.971023 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerDied","Data":"dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d"} Jan 26 15:42:35 crc kubenswrapper[4823]: I0126 15:42:35.990845 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerStarted","Data":"32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb"} Jan 26 15:42:36 crc kubenswrapper[4823]: I0126 15:42:36.017402 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mgnj2" podStartSLOduration=4.61440113 podStartE2EDuration="10.017384649s" podCreationTimestamp="2026-01-26 15:42:26 +0000 UTC" firstStartedPulling="2026-01-26 15:42:28.914865858 +0000 UTC m=+3345.600328963" lastFinishedPulling="2026-01-26 15:42:34.317849377 +0000 UTC m=+3351.003312482" observedRunningTime="2026-01-26 15:42:36.01449211 +0000 UTC m=+3352.699955225" watchObservedRunningTime="2026-01-26 15:42:36.017384649 +0000 UTC m=+3352.702847754" Jan 26 15:42:37 crc kubenswrapper[4823]: I0126 15:42:37.004320 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerStarted","Data":"9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437"} Jan 26 15:42:37 crc kubenswrapper[4823]: I0126 15:42:37.032387 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fm2dd" podStartSLOduration=10.050334667 podStartE2EDuration="17.03233979s" podCreationTimestamp="2026-01-26 15:42:20 +0000 UTC" firstStartedPulling="2026-01-26 15:42:28.90870885 +0000 UTC m=+3345.594171955" lastFinishedPulling="2026-01-26 15:42:35.890713953 +0000 UTC m=+3352.576177078" observedRunningTime="2026-01-26 15:42:37.023922181 
+0000 UTC m=+3353.709385296" watchObservedRunningTime="2026-01-26 15:42:37.03233979 +0000 UTC m=+3353.717802895" Jan 26 15:42:37 crc kubenswrapper[4823]: I0126 15:42:37.072985 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:37 crc kubenswrapper[4823]: I0126 15:42:37.073037 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:37 crc kubenswrapper[4823]: I0126 15:42:37.122568 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:39 crc kubenswrapper[4823]: I0126 15:42:39.062824 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:39 crc kubenswrapper[4823]: I0126 15:42:39.113051 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgnj2"] Jan 26 15:42:40 crc kubenswrapper[4823]: I0126 15:42:40.473187 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:40 crc kubenswrapper[4823]: I0126 15:42:40.473525 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.032891 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mgnj2" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="registry-server" containerID="cri-o://32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb" gracePeriod=2 Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.515308 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-fm2dd" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" 
containerName="registry-server" probeResult="failure" output=< Jan 26 15:42:41 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 15:42:41 crc kubenswrapper[4823]: > Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.576900 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.708833 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-utilities\") pod \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.709006 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-catalog-content\") pod \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.709275 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcwz2\" (UniqueName: \"kubernetes.io/projected/3e0da8dc-3c65-47e8-aebe-d2656be6fede-kube-api-access-dcwz2\") pod \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\" (UID: \"3e0da8dc-3c65-47e8-aebe-d2656be6fede\") " Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.710160 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-utilities" (OuterVolumeSpecName: "utilities") pod "3e0da8dc-3c65-47e8-aebe-d2656be6fede" (UID: "3e0da8dc-3c65-47e8-aebe-d2656be6fede"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.710777 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.716083 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0da8dc-3c65-47e8-aebe-d2656be6fede-kube-api-access-dcwz2" (OuterVolumeSpecName: "kube-api-access-dcwz2") pod "3e0da8dc-3c65-47e8-aebe-d2656be6fede" (UID: "3e0da8dc-3c65-47e8-aebe-d2656be6fede"). InnerVolumeSpecName "kube-api-access-dcwz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.732497 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e0da8dc-3c65-47e8-aebe-d2656be6fede" (UID: "3e0da8dc-3c65-47e8-aebe-d2656be6fede"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.812990 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0da8dc-3c65-47e8-aebe-d2656be6fede-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:41 crc kubenswrapper[4823]: I0126 15:42:41.813027 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcwz2\" (UniqueName: \"kubernetes.io/projected/3e0da8dc-3c65-47e8-aebe-d2656be6fede-kube-api-access-dcwz2\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.045849 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerID="32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb" exitCode=0 Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.045894 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mgnj2" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.045910 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerDied","Data":"32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb"} Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.046386 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mgnj2" event={"ID":"3e0da8dc-3c65-47e8-aebe-d2656be6fede","Type":"ContainerDied","Data":"ce359540c7acc9309661e1b477b7375d3830499040bfdcef728db0ca2fece6f0"} Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.046417 4823 scope.go:117] "RemoveContainer" containerID="32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.085415 4823 scope.go:117] "RemoveContainer" 
containerID="cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.092641 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgnj2"] Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.101316 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mgnj2"] Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.114558 4823 scope.go:117] "RemoveContainer" containerID="d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.134254 4823 scope.go:117] "RemoveContainer" containerID="32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb" Jan 26 15:42:42 crc kubenswrapper[4823]: E0126 15:42:42.136325 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb\": container with ID starting with 32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb not found: ID does not exist" containerID="32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.136421 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb"} err="failed to get container status \"32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb\": rpc error: code = NotFound desc = could not find container \"32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb\": container with ID starting with 32fe883dc6a3ebeded18b6aaa4e2b91bb6cebcd31bb336554597cbb76024a4eb not found: ID does not exist" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.136451 4823 scope.go:117] "RemoveContainer" 
containerID="cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7" Jan 26 15:42:42 crc kubenswrapper[4823]: E0126 15:42:42.136890 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7\": container with ID starting with cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7 not found: ID does not exist" containerID="cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.136933 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7"} err="failed to get container status \"cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7\": rpc error: code = NotFound desc = could not find container \"cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7\": container with ID starting with cad73f61ac72336cc83ceddf957fd8ea4b1badcee4fd3f1ea0c6087ec45e13a7 not found: ID does not exist" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.136959 4823 scope.go:117] "RemoveContainer" containerID="d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a" Jan 26 15:42:42 crc kubenswrapper[4823]: E0126 15:42:42.137353 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a\": container with ID starting with d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a not found: ID does not exist" containerID="d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.137418 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a"} err="failed to get container status \"d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a\": rpc error: code = NotFound desc = could not find container \"d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a\": container with ID starting with d3b42a41854e224715acc1d688d5b7bb3e11e7ec06d8fceabdefda06f1f0b31a not found: ID does not exist" Jan 26 15:42:42 crc kubenswrapper[4823]: I0126 15:42:42.574471 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:42:43 crc kubenswrapper[4823]: I0126 15:42:43.061581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"949d4561f7ecaa2ea1507042cfa385d35a3f18b3ab1d6f9e7dc89ab248eb28ac"} Jan 26 15:42:43 crc kubenswrapper[4823]: I0126 15:42:43.574271 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" path="/var/lib/kubelet/pods/3e0da8dc-3c65-47e8-aebe-d2656be6fede/volumes" Jan 26 15:42:44 crc kubenswrapper[4823]: I0126 15:42:44.077671 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"1529ef7b-113d-479f-b4b7-d134a51539e3","Type":"ContainerStarted","Data":"c95d8fe1be519296ed4f5ffd641140beba45cfa4cbe03c889a27f4a50ce1e91c"} Jan 26 15:42:44 crc kubenswrapper[4823]: I0126 15:42:44.108783 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-full" podStartSLOduration=4.502910036 podStartE2EDuration="51.108757313s" podCreationTimestamp="2026-01-26 15:41:53 +0000 UTC" firstStartedPulling="2026-01-26 15:41:55.385896571 +0000 UTC m=+3312.071359676" lastFinishedPulling="2026-01-26 15:42:41.991743848 +0000 UTC 
m=+3358.677206953" observedRunningTime="2026-01-26 15:42:44.098829884 +0000 UTC m=+3360.784292999" watchObservedRunningTime="2026-01-26 15:42:44.108757313 +0000 UTC m=+3360.794220418" Jan 26 15:42:50 crc kubenswrapper[4823]: I0126 15:42:50.527560 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:50 crc kubenswrapper[4823]: I0126 15:42:50.580295 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:51 crc kubenswrapper[4823]: I0126 15:42:51.272192 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm2dd"] Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.148324 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fm2dd" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="registry-server" containerID="cri-o://9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437" gracePeriod=2 Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.653664 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.751134 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-catalog-content\") pod \"b608726d-f21a-4421-a3f8-494fb9ea4de5\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.751246 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2lr7\" (UniqueName: \"kubernetes.io/projected/b608726d-f21a-4421-a3f8-494fb9ea4de5-kube-api-access-f2lr7\") pod \"b608726d-f21a-4421-a3f8-494fb9ea4de5\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.751647 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-utilities\") pod \"b608726d-f21a-4421-a3f8-494fb9ea4de5\" (UID: \"b608726d-f21a-4421-a3f8-494fb9ea4de5\") " Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.760220 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-utilities" (OuterVolumeSpecName: "utilities") pod "b608726d-f21a-4421-a3f8-494fb9ea4de5" (UID: "b608726d-f21a-4421-a3f8-494fb9ea4de5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.762654 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b608726d-f21a-4421-a3f8-494fb9ea4de5-kube-api-access-f2lr7" (OuterVolumeSpecName: "kube-api-access-f2lr7") pod "b608726d-f21a-4421-a3f8-494fb9ea4de5" (UID: "b608726d-f21a-4421-a3f8-494fb9ea4de5"). InnerVolumeSpecName "kube-api-access-f2lr7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.853933 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2lr7\" (UniqueName: \"kubernetes.io/projected/b608726d-f21a-4421-a3f8-494fb9ea4de5-kube-api-access-f2lr7\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.853979 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.892988 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b608726d-f21a-4421-a3f8-494fb9ea4de5" (UID: "b608726d-f21a-4421-a3f8-494fb9ea4de5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:42:52 crc kubenswrapper[4823]: I0126 15:42:52.955695 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b608726d-f21a-4421-a3f8-494fb9ea4de5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.159753 4823 generic.go:334] "Generic (PLEG): container finished" podID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerID="9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437" exitCode=0 Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.159806 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerDied","Data":"9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437"} Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.159835 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-fm2dd" event={"ID":"b608726d-f21a-4421-a3f8-494fb9ea4de5","Type":"ContainerDied","Data":"6ec31f4b96d6478a9b8cc1bf97ea6c9f11563fecab4db2e52c3901e5545b2240"} Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.159856 4823 scope.go:117] "RemoveContainer" containerID="9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.159810 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fm2dd" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.191227 4823 scope.go:117] "RemoveContainer" containerID="dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.196881 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fm2dd"] Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.208524 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-fm2dd"] Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.215599 4823 scope.go:117] "RemoveContainer" containerID="c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.263198 4823 scope.go:117] "RemoveContainer" containerID="9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437" Jan 26 15:42:53 crc kubenswrapper[4823]: E0126 15:42:53.287531 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437\": container with ID starting with 9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437 not found: ID does not exist" containerID="9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.287595 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437"} err="failed to get container status \"9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437\": rpc error: code = NotFound desc = could not find container \"9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437\": container with ID starting with 9e4dd2b0057d5b7aa823bcc667b784d70f98afe9d49212a705aac8e3861d5437 not found: ID does not exist" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.287627 4823 scope.go:117] "RemoveContainer" containerID="dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d" Jan 26 15:42:53 crc kubenswrapper[4823]: E0126 15:42:53.290733 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d\": container with ID starting with dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d not found: ID does not exist" containerID="dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.290785 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d"} err="failed to get container status \"dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d\": rpc error: code = NotFound desc = could not find container \"dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d\": container with ID starting with dcf21755d9e6ac473e1f09d51d3024d3747d16cb2d03541afb449842d45b7c6d not found: ID does not exist" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.290820 4823 scope.go:117] "RemoveContainer" containerID="c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144" Jan 26 15:42:53 crc kubenswrapper[4823]: E0126 
15:42:53.291300 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144\": container with ID starting with c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144 not found: ID does not exist" containerID="c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.291326 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144"} err="failed to get container status \"c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144\": rpc error: code = NotFound desc = could not find container \"c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144\": container with ID starting with c5a4410bd63202b1eec17d0199fee54eee44c6dde512a834f037a2c0c9602144 not found: ID does not exist" Jan 26 15:42:53 crc kubenswrapper[4823]: I0126 15:42:53.569481 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" path="/var/lib/kubelet/pods/b608726d-f21a-4421-a3f8-494fb9ea4de5/volumes" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.363416 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hrh7z"] Jan 26 15:43:49 crc kubenswrapper[4823]: E0126 15:43:49.364617 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="extract-utilities" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.364662 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="extract-utilities" Jan 26 15:43:49 crc kubenswrapper[4823]: E0126 15:43:49.364687 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="extract-content" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.364831 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="extract-content" Jan 26 15:43:49 crc kubenswrapper[4823]: E0126 15:43:49.364850 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="registry-server" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.364860 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="registry-server" Jan 26 15:43:49 crc kubenswrapper[4823]: E0126 15:43:49.364886 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="extract-content" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.364894 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="extract-content" Jan 26 15:43:49 crc kubenswrapper[4823]: E0126 15:43:49.364917 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="registry-server" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.364927 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="registry-server" Jan 26 15:43:49 crc kubenswrapper[4823]: E0126 15:43:49.364948 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="extract-utilities" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.364957 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="extract-utilities" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.365305 4823 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b608726d-f21a-4421-a3f8-494fb9ea4de5" containerName="registry-server" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.365334 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0da8dc-3c65-47e8-aebe-d2656be6fede" containerName="registry-server" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.367358 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.389049 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrh7z"] Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.491733 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zfk9\" (UniqueName: \"kubernetes.io/projected/d0248b7c-2819-4617-91d6-3e85f583e8c6-kube-api-access-2zfk9\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.492026 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-utilities\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.492159 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-catalog-content\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.593900 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2zfk9\" (UniqueName: \"kubernetes.io/projected/d0248b7c-2819-4617-91d6-3e85f583e8c6-kube-api-access-2zfk9\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.593969 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-utilities\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.594009 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-catalog-content\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.594528 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-catalog-content\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.594860 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-utilities\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.615277 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2zfk9\" (UniqueName: \"kubernetes.io/projected/d0248b7c-2819-4617-91d6-3e85f583e8c6-kube-api-access-2zfk9\") pod \"community-operators-hrh7z\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:49 crc kubenswrapper[4823]: I0126 15:43:49.704484 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:50 crc kubenswrapper[4823]: I0126 15:43:50.281129 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hrh7z"] Jan 26 15:43:50 crc kubenswrapper[4823]: W0126 15:43:50.285571 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0248b7c_2819_4617_91d6_3e85f583e8c6.slice/crio-41ed6dafcad394bca845d95c1a78b4967066e1e126e60cb0bca4d571737f1668 WatchSource:0}: Error finding container 41ed6dafcad394bca845d95c1a78b4967066e1e126e60cb0bca4d571737f1668: Status 404 returned error can't find the container with id 41ed6dafcad394bca845d95c1a78b4967066e1e126e60cb0bca4d571737f1668 Jan 26 15:43:50 crc kubenswrapper[4823]: I0126 15:43:50.780398 4823 generic.go:334] "Generic (PLEG): container finished" podID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerID="d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22" exitCode=0 Jan 26 15:43:50 crc kubenswrapper[4823]: I0126 15:43:50.780578 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerDied","Data":"d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22"} Jan 26 15:43:50 crc kubenswrapper[4823]: I0126 15:43:50.780711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" 
event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerStarted","Data":"41ed6dafcad394bca845d95c1a78b4967066e1e126e60cb0bca4d571737f1668"} Jan 26 15:43:51 crc kubenswrapper[4823]: I0126 15:43:51.804410 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerStarted","Data":"38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e"} Jan 26 15:43:52 crc kubenswrapper[4823]: I0126 15:43:52.814176 4823 generic.go:334] "Generic (PLEG): container finished" podID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerID="38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e" exitCode=0 Jan 26 15:43:52 crc kubenswrapper[4823]: I0126 15:43:52.814267 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerDied","Data":"38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e"} Jan 26 15:43:53 crc kubenswrapper[4823]: I0126 15:43:53.829214 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerStarted","Data":"aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20"} Jan 26 15:43:53 crc kubenswrapper[4823]: I0126 15:43:53.863744 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hrh7z" podStartSLOduration=2.411808037 podStartE2EDuration="4.863719856s" podCreationTimestamp="2026-01-26 15:43:49 +0000 UTC" firstStartedPulling="2026-01-26 15:43:50.782082847 +0000 UTC m=+3427.467545952" lastFinishedPulling="2026-01-26 15:43:53.233994666 +0000 UTC m=+3429.919457771" observedRunningTime="2026-01-26 15:43:53.857319311 +0000 UTC m=+3430.542782466" watchObservedRunningTime="2026-01-26 15:43:53.863719856 +0000 UTC 
m=+3430.549182961" Jan 26 15:43:59 crc kubenswrapper[4823]: I0126 15:43:59.705588 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:59 crc kubenswrapper[4823]: I0126 15:43:59.705940 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:59 crc kubenswrapper[4823]: I0126 15:43:59.756117 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:59 crc kubenswrapper[4823]: I0126 15:43:59.918042 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:43:59 crc kubenswrapper[4823]: I0126 15:43:59.996053 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrh7z"] Jan 26 15:44:01 crc kubenswrapper[4823]: I0126 15:44:01.888892 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hrh7z" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="registry-server" containerID="cri-o://aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20" gracePeriod=2 Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.846550 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.861776 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-utilities\") pod \"d0248b7c-2819-4617-91d6-3e85f583e8c6\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.861855 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-catalog-content\") pod \"d0248b7c-2819-4617-91d6-3e85f583e8c6\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.861891 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zfk9\" (UniqueName: \"kubernetes.io/projected/d0248b7c-2819-4617-91d6-3e85f583e8c6-kube-api-access-2zfk9\") pod \"d0248b7c-2819-4617-91d6-3e85f583e8c6\" (UID: \"d0248b7c-2819-4617-91d6-3e85f583e8c6\") " Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.863004 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-utilities" (OuterVolumeSpecName: "utilities") pod "d0248b7c-2819-4617-91d6-3e85f583e8c6" (UID: "d0248b7c-2819-4617-91d6-3e85f583e8c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.863409 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.868317 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0248b7c-2819-4617-91d6-3e85f583e8c6-kube-api-access-2zfk9" (OuterVolumeSpecName: "kube-api-access-2zfk9") pod "d0248b7c-2819-4617-91d6-3e85f583e8c6" (UID: "d0248b7c-2819-4617-91d6-3e85f583e8c6"). InnerVolumeSpecName "kube-api-access-2zfk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.900988 4823 generic.go:334] "Generic (PLEG): container finished" podID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerID="aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20" exitCode=0 Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.901032 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerDied","Data":"aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20"} Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.902399 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hrh7z" event={"ID":"d0248b7c-2819-4617-91d6-3e85f583e8c6","Type":"ContainerDied","Data":"41ed6dafcad394bca845d95c1a78b4967066e1e126e60cb0bca4d571737f1668"} Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.901050 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hrh7z" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.902452 4823 scope.go:117] "RemoveContainer" containerID="aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.922275 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0248b7c-2819-4617-91d6-3e85f583e8c6" (UID: "d0248b7c-2819-4617-91d6-3e85f583e8c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.933281 4823 scope.go:117] "RemoveContainer" containerID="38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.958149 4823 scope.go:117] "RemoveContainer" containerID="d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.964858 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0248b7c-2819-4617-91d6-3e85f583e8c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.965059 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zfk9\" (UniqueName: \"kubernetes.io/projected/d0248b7c-2819-4617-91d6-3e85f583e8c6-kube-api-access-2zfk9\") on node \"crc\" DevicePath \"\"" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.997598 4823 scope.go:117] "RemoveContainer" containerID="aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20" Jan 26 15:44:02 crc kubenswrapper[4823]: E0126 15:44:02.998478 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20\": container with ID starting with aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20 not found: ID does not exist" containerID="aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.998511 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20"} err="failed to get container status \"aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20\": rpc error: code = NotFound desc = could not find container \"aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20\": container with ID starting with aa1e133c54466b96c22c2a40c78b9c4d130f1ccd8a44f8adac66d888bad19e20 not found: ID does not exist" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.998540 4823 scope.go:117] "RemoveContainer" containerID="38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e" Jan 26 15:44:02 crc kubenswrapper[4823]: E0126 15:44:02.998849 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e\": container with ID starting with 38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e not found: ID does not exist" containerID="38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.998871 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e"} err="failed to get container status \"38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e\": rpc error: code = NotFound desc = could not find container \"38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e\": container with ID 
starting with 38c36258951d8325a320fd6f0684d83562b3d13985c7373c30e842c1f47a817e not found: ID does not exist" Jan 26 15:44:02 crc kubenswrapper[4823]: I0126 15:44:02.998885 4823 scope.go:117] "RemoveContainer" containerID="d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22" Jan 26 15:44:02 crc kubenswrapper[4823]: E0126 15:44:02.999834 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22\": container with ID starting with d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22 not found: ID does not exist" containerID="d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22" Jan 26 15:44:03 crc kubenswrapper[4823]: I0126 15:44:03.000126 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22"} err="failed to get container status \"d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22\": rpc error: code = NotFound desc = could not find container \"d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22\": container with ID starting with d9149dfcfa8a578713ffb8f57eb9d0b4d08bc8a8cc76dbeb9943ca03a062ee22 not found: ID does not exist" Jan 26 15:44:03 crc kubenswrapper[4823]: I0126 15:44:03.238295 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hrh7z"] Jan 26 15:44:03 crc kubenswrapper[4823]: I0126 15:44:03.247568 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hrh7z"] Jan 26 15:44:03 crc kubenswrapper[4823]: I0126 15:44:03.571730 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" path="/var/lib/kubelet/pods/d0248b7c-2819-4617-91d6-3e85f583e8c6/volumes" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 
15:45:00.141641 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw"] Jan 26 15:45:00 crc kubenswrapper[4823]: E0126 15:45:00.142700 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="extract-content" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.142719 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="extract-content" Jan 26 15:45:00 crc kubenswrapper[4823]: E0126 15:45:00.142748 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="registry-server" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.142755 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="registry-server" Jan 26 15:45:00 crc kubenswrapper[4823]: E0126 15:45:00.142775 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="extract-utilities" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.142785 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="extract-utilities" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.143009 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0248b7c-2819-4617-91d6-3e85f583e8c6" containerName="registry-server" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.143814 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.146218 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.146541 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.154922 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw"] Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.290262 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1affab3-fe81-427e-a854-2f53a8f705f1-config-volume\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.290623 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1affab3-fe81-427e-a854-2f53a8f705f1-secret-volume\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.290691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txgrv\" (UniqueName: \"kubernetes.io/projected/e1affab3-fe81-427e-a854-2f53a8f705f1-kube-api-access-txgrv\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.392915 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1affab3-fe81-427e-a854-2f53a8f705f1-secret-volume\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.393041 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txgrv\" (UniqueName: \"kubernetes.io/projected/e1affab3-fe81-427e-a854-2f53a8f705f1-kube-api-access-txgrv\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.393229 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1affab3-fe81-427e-a854-2f53a8f705f1-config-volume\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.400402 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1affab3-fe81-427e-a854-2f53a8f705f1-config-volume\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.409999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e1affab3-fe81-427e-a854-2f53a8f705f1-secret-volume\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.416673 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txgrv\" (UniqueName: \"kubernetes.io/projected/e1affab3-fe81-427e-a854-2f53a8f705f1-kube-api-access-txgrv\") pod \"collect-profiles-29490705-7wgkw\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.512073 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:00 crc kubenswrapper[4823]: I0126 15:45:00.985167 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw"] Jan 26 15:45:01 crc kubenswrapper[4823]: I0126 15:45:01.452536 4823 generic.go:334] "Generic (PLEG): container finished" podID="e1affab3-fe81-427e-a854-2f53a8f705f1" containerID="74768d6742faded9cb18583e9239d41c6892e48f1f0775a43d05652514070de1" exitCode=0 Jan 26 15:45:01 crc kubenswrapper[4823]: I0126 15:45:01.452591 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" event={"ID":"e1affab3-fe81-427e-a854-2f53a8f705f1","Type":"ContainerDied","Data":"74768d6742faded9cb18583e9239d41c6892e48f1f0775a43d05652514070de1"} Jan 26 15:45:01 crc kubenswrapper[4823]: I0126 15:45:01.452850 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" 
event={"ID":"e1affab3-fe81-427e-a854-2f53a8f705f1","Type":"ContainerStarted","Data":"b905f0ec09e828e2e1c3822b7f2c8c4a4b76c455cdec541a8a230095ca0aeb90"} Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.822459 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.944505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txgrv\" (UniqueName: \"kubernetes.io/projected/e1affab3-fe81-427e-a854-2f53a8f705f1-kube-api-access-txgrv\") pod \"e1affab3-fe81-427e-a854-2f53a8f705f1\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.944598 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1affab3-fe81-427e-a854-2f53a8f705f1-config-volume\") pod \"e1affab3-fe81-427e-a854-2f53a8f705f1\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.944801 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1affab3-fe81-427e-a854-2f53a8f705f1-secret-volume\") pod \"e1affab3-fe81-427e-a854-2f53a8f705f1\" (UID: \"e1affab3-fe81-427e-a854-2f53a8f705f1\") " Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.945464 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1affab3-fe81-427e-a854-2f53a8f705f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1affab3-fe81-427e-a854-2f53a8f705f1" (UID: "e1affab3-fe81-427e-a854-2f53a8f705f1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.946067 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1affab3-fe81-427e-a854-2f53a8f705f1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.950610 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1affab3-fe81-427e-a854-2f53a8f705f1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e1affab3-fe81-427e-a854-2f53a8f705f1" (UID: "e1affab3-fe81-427e-a854-2f53a8f705f1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 15:45:02 crc kubenswrapper[4823]: I0126 15:45:02.950697 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1affab3-fe81-427e-a854-2f53a8f705f1-kube-api-access-txgrv" (OuterVolumeSpecName: "kube-api-access-txgrv") pod "e1affab3-fe81-427e-a854-2f53a8f705f1" (UID: "e1affab3-fe81-427e-a854-2f53a8f705f1"). InnerVolumeSpecName "kube-api-access-txgrv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.049227 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1affab3-fe81-427e-a854-2f53a8f705f1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.049301 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txgrv\" (UniqueName: \"kubernetes.io/projected/e1affab3-fe81-427e-a854-2f53a8f705f1-kube-api-access-txgrv\") on node \"crc\" DevicePath \"\"" Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.472202 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" event={"ID":"e1affab3-fe81-427e-a854-2f53a8f705f1","Type":"ContainerDied","Data":"b905f0ec09e828e2e1c3822b7f2c8c4a4b76c455cdec541a8a230095ca0aeb90"} Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.472249 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b905f0ec09e828e2e1c3822b7f2c8c4a4b76c455cdec541a8a230095ca0aeb90" Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.472313 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw" Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.907829 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l"] Jan 26 15:45:03 crc kubenswrapper[4823]: I0126 15:45:03.921007 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490660-k4l9l"] Jan 26 15:45:04 crc kubenswrapper[4823]: I0126 15:45:04.508801 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:45:04 crc kubenswrapper[4823]: I0126 15:45:04.508866 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:45:05 crc kubenswrapper[4823]: I0126 15:45:05.575818 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a95fb51d-24d9-42e0-a51c-18314aadfb14" path="/var/lib/kubelet/pods/a95fb51d-24d9-42e0-a51c-18314aadfb14/volumes" Jan 26 15:45:07 crc kubenswrapper[4823]: I0126 15:45:07.318102 4823 scope.go:117] "RemoveContainer" containerID="687b926217a3cc4f22b6e8c6a17a347b77024d492b783d3a7473253a968a51ab" Jan 26 15:45:34 crc kubenswrapper[4823]: I0126 15:45:34.508211 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 26 15:45:34 crc kubenswrapper[4823]: I0126 15:45:34.508685 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.516891 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.517502 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.517563 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.518498 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"949d4561f7ecaa2ea1507042cfa385d35a3f18b3ab1d6f9e7dc89ab248eb28ac"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.518563 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://949d4561f7ecaa2ea1507042cfa385d35a3f18b3ab1d6f9e7dc89ab248eb28ac" gracePeriod=600 Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.977282 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="949d4561f7ecaa2ea1507042cfa385d35a3f18b3ab1d6f9e7dc89ab248eb28ac" exitCode=0 Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.977398 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"949d4561f7ecaa2ea1507042cfa385d35a3f18b3ab1d6f9e7dc89ab248eb28ac"} Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.977554 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11"} Jan 26 15:46:04 crc kubenswrapper[4823]: I0126 15:46:04.977576 4823 scope.go:117] "RemoveContainer" containerID="9bbbe613d4b45e9ceabd568ff90fc77495e7850e1a550ec75fdf22fe62987a58" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.175561 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bj6zv"] Jan 26 15:47:35 crc kubenswrapper[4823]: E0126 15:47:35.176664 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1affab3-fe81-427e-a854-2f53a8f705f1" containerName="collect-profiles" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.176682 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1affab3-fe81-427e-a854-2f53a8f705f1" containerName="collect-profiles" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.176968 4823 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e1affab3-fe81-427e-a854-2f53a8f705f1" containerName="collect-profiles" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.178656 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.197310 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bj6zv"] Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.372762 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-catalog-content\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.373236 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfvw\" (UniqueName: \"kubernetes.io/projected/0498a66c-99c2-4130-8750-d204550abc34-kube-api-access-2pfvw\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.373334 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-utilities\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.476198 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-catalog-content\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.476882 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-catalog-content\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.477098 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfvw\" (UniqueName: \"kubernetes.io/projected/0498a66c-99c2-4130-8750-d204550abc34-kube-api-access-2pfvw\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.477227 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-utilities\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.477580 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-utilities\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.500317 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pfvw\" (UniqueName: 
\"kubernetes.io/projected/0498a66c-99c2-4130-8750-d204550abc34-kube-api-access-2pfvw\") pod \"certified-operators-bj6zv\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:35 crc kubenswrapper[4823]: I0126 15:47:35.536415 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:36 crc kubenswrapper[4823]: I0126 15:47:36.065463 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bj6zv"] Jan 26 15:47:36 crc kubenswrapper[4823]: I0126 15:47:36.800447 4823 generic.go:334] "Generic (PLEG): container finished" podID="0498a66c-99c2-4130-8750-d204550abc34" containerID="86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a" exitCode=0 Jan 26 15:47:36 crc kubenswrapper[4823]: I0126 15:47:36.800609 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerDied","Data":"86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a"} Jan 26 15:47:36 crc kubenswrapper[4823]: I0126 15:47:36.800841 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerStarted","Data":"e6e12b9682486f2691a78775cf943a4833e631c294252c5ff77be7bbfe7d9b36"} Jan 26 15:47:36 crc kubenswrapper[4823]: I0126 15:47:36.804064 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:47:37 crc kubenswrapper[4823]: I0126 15:47:37.809928 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerStarted","Data":"bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356"} Jan 26 15:47:38 
crc kubenswrapper[4823]: I0126 15:47:38.819498 4823 generic.go:334] "Generic (PLEG): container finished" podID="0498a66c-99c2-4130-8750-d204550abc34" containerID="bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356" exitCode=0 Jan 26 15:47:38 crc kubenswrapper[4823]: I0126 15:47:38.819604 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerDied","Data":"bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356"} Jan 26 15:47:40 crc kubenswrapper[4823]: I0126 15:47:40.839687 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerStarted","Data":"a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa"} Jan 26 15:47:40 crc kubenswrapper[4823]: I0126 15:47:40.869647 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bj6zv" podStartSLOduration=2.575930503 podStartE2EDuration="5.869619995s" podCreationTimestamp="2026-01-26 15:47:35 +0000 UTC" firstStartedPulling="2026-01-26 15:47:36.803797252 +0000 UTC m=+3653.489260357" lastFinishedPulling="2026-01-26 15:47:40.097486744 +0000 UTC m=+3656.782949849" observedRunningTime="2026-01-26 15:47:40.857453925 +0000 UTC m=+3657.542917050" watchObservedRunningTime="2026-01-26 15:47:40.869619995 +0000 UTC m=+3657.555083110" Jan 26 15:47:45 crc kubenswrapper[4823]: I0126 15:47:45.537251 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:45 crc kubenswrapper[4823]: I0126 15:47:45.539081 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:45 crc kubenswrapper[4823]: I0126 15:47:45.589914 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:45 crc kubenswrapper[4823]: I0126 15:47:45.927060 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:45 crc kubenswrapper[4823]: I0126 15:47:45.977733 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bj6zv"] Jan 26 15:47:47 crc kubenswrapper[4823]: I0126 15:47:47.902260 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bj6zv" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="registry-server" containerID="cri-o://a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa" gracePeriod=2 Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.634694 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.657122 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfvw\" (UniqueName: \"kubernetes.io/projected/0498a66c-99c2-4130-8750-d204550abc34-kube-api-access-2pfvw\") pod \"0498a66c-99c2-4130-8750-d204550abc34\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.657230 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-utilities\") pod \"0498a66c-99c2-4130-8750-d204550abc34\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.657409 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-catalog-content\") pod \"0498a66c-99c2-4130-8750-d204550abc34\" (UID: \"0498a66c-99c2-4130-8750-d204550abc34\") " Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.658494 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-utilities" (OuterVolumeSpecName: "utilities") pod "0498a66c-99c2-4130-8750-d204550abc34" (UID: "0498a66c-99c2-4130-8750-d204550abc34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.670771 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0498a66c-99c2-4130-8750-d204550abc34-kube-api-access-2pfvw" (OuterVolumeSpecName: "kube-api-access-2pfvw") pod "0498a66c-99c2-4130-8750-d204550abc34" (UID: "0498a66c-99c2-4130-8750-d204550abc34"). InnerVolumeSpecName "kube-api-access-2pfvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.760121 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0498a66c-99c2-4130-8750-d204550abc34" (UID: "0498a66c-99c2-4130-8750-d204550abc34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.773230 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.773289 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0498a66c-99c2-4130-8750-d204550abc34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.773305 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfvw\" (UniqueName: \"kubernetes.io/projected/0498a66c-99c2-4130-8750-d204550abc34-kube-api-access-2pfvw\") on node \"crc\" DevicePath \"\"" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.914987 4823 generic.go:334] "Generic (PLEG): container finished" podID="0498a66c-99c2-4130-8750-d204550abc34" containerID="a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa" exitCode=0 Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.915075 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerDied","Data":"a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa"} Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.915388 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bj6zv" event={"ID":"0498a66c-99c2-4130-8750-d204550abc34","Type":"ContainerDied","Data":"e6e12b9682486f2691a78775cf943a4833e631c294252c5ff77be7bbfe7d9b36"} Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.915216 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bj6zv" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.915412 4823 scope.go:117] "RemoveContainer" containerID="a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.945046 4823 scope.go:117] "RemoveContainer" containerID="bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356" Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.962511 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bj6zv"] Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.975356 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bj6zv"] Jan 26 15:47:48 crc kubenswrapper[4823]: I0126 15:47:48.977186 4823 scope.go:117] "RemoveContainer" containerID="86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.018429 4823 scope.go:117] "RemoveContainer" containerID="a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa" Jan 26 15:47:49 crc kubenswrapper[4823]: E0126 15:47:49.018982 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa\": container with ID starting with a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa not found: ID does not exist" containerID="a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.019035 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa"} err="failed to get container status \"a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa\": rpc error: code = NotFound desc = could not find 
container \"a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa\": container with ID starting with a1555074091d8f53266107a17a0dfe45d7f6f03c0192a46e2307236df2fabaaa not found: ID does not exist" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.019069 4823 scope.go:117] "RemoveContainer" containerID="bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356" Jan 26 15:47:49 crc kubenswrapper[4823]: E0126 15:47:49.019333 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356\": container with ID starting with bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356 not found: ID does not exist" containerID="bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.019357 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356"} err="failed to get container status \"bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356\": rpc error: code = NotFound desc = could not find container \"bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356\": container with ID starting with bca990bcc48ad0f013f12a0b99b2f657abd595f19460a729b5f378626837e356 not found: ID does not exist" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.019391 4823 scope.go:117] "RemoveContainer" containerID="86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a" Jan 26 15:47:49 crc kubenswrapper[4823]: E0126 15:47:49.019743 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a\": container with ID starting with 86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a not found: ID does 
not exist" containerID="86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.019839 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a"} err="failed to get container status \"86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a\": rpc error: code = NotFound desc = could not find container \"86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a\": container with ID starting with 86e49cf89289d6d4b60ba0260f4100a799b37981df3a9b3a84e39499fff0575a not found: ID does not exist" Jan 26 15:47:49 crc kubenswrapper[4823]: I0126 15:47:49.575810 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0498a66c-99c2-4130-8750-d204550abc34" path="/var/lib/kubelet/pods/0498a66c-99c2-4130-8750-d204550abc34/volumes" Jan 26 15:48:04 crc kubenswrapper[4823]: I0126 15:48:04.508485 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:48:04 crc kubenswrapper[4823]: I0126 15:48:04.509035 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:48:34 crc kubenswrapper[4823]: I0126 15:48:34.508580 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 26 15:48:34 crc kubenswrapper[4823]: I0126 15:48:34.509210 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:49:04 crc kubenswrapper[4823]: I0126 15:49:04.527316 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:49:04 crc kubenswrapper[4823]: I0126 15:49:04.527905 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:49:04 crc kubenswrapper[4823]: I0126 15:49:04.527971 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:49:04 crc kubenswrapper[4823]: I0126 15:49:04.529316 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:49:04 crc kubenswrapper[4823]: I0126 15:49:04.529397 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" gracePeriod=600 Jan 26 15:49:04 crc kubenswrapper[4823]: E0126 15:49:04.658723 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:49:04 crc kubenswrapper[4823]: E0126 15:49:04.685208 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3a166e_bc51_4f3e_baf7_9a9d3cd4e85d.slice/crio-7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a3a166e_bc51_4f3e_baf7_9a9d3cd4e85d.slice/crio-conmon-7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:49:05 crc kubenswrapper[4823]: I0126 15:49:05.568736 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" exitCode=0 Jan 26 15:49:05 crc kubenswrapper[4823]: I0126 15:49:05.571875 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11"} 
Jan 26 15:49:05 crc kubenswrapper[4823]: I0126 15:49:05.572060 4823 scope.go:117] "RemoveContainer" containerID="949d4561f7ecaa2ea1507042cfa385d35a3f18b3ab1d6f9e7dc89ab248eb28ac" Jan 26 15:49:05 crc kubenswrapper[4823]: I0126 15:49:05.572769 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:49:05 crc kubenswrapper[4823]: E0126 15:49:05.573167 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:49:19 crc kubenswrapper[4823]: I0126 15:49:19.565885 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:49:19 crc kubenswrapper[4823]: E0126 15:49:19.566675 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:49:34 crc kubenswrapper[4823]: I0126 15:49:34.560081 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:49:34 crc kubenswrapper[4823]: E0126 15:49:34.560874 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:49:38 crc kubenswrapper[4823]: I0126 15:49:38.035276 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-e4e2-account-create-update-rg4zc"] Jan 26 15:49:38 crc kubenswrapper[4823]: I0126 15:49:38.043800 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-e4e2-account-create-update-rg4zc"] Jan 26 15:49:39 crc kubenswrapper[4823]: I0126 15:49:39.035583 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-hc2s9"] Jan 26 15:49:39 crc kubenswrapper[4823]: I0126 15:49:39.046021 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-hc2s9"] Jan 26 15:49:39 crc kubenswrapper[4823]: I0126 15:49:39.571588 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a7ff82-985c-4819-997e-6624e6bdcffc" path="/var/lib/kubelet/pods/48a7ff82-985c-4819-997e-6624e6bdcffc/volumes" Jan 26 15:49:39 crc kubenswrapper[4823]: I0126 15:49:39.572718 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de86d52a-13e3-4228-99ab-3e47c27432f8" path="/var/lib/kubelet/pods/de86d52a-13e3-4228-99ab-3e47c27432f8/volumes" Jan 26 15:49:48 crc kubenswrapper[4823]: I0126 15:49:48.561116 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:49:48 crc kubenswrapper[4823]: E0126 15:49:48.562003 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:49:59 crc kubenswrapper[4823]: I0126 15:49:59.564483 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:49:59 crc kubenswrapper[4823]: E0126 15:49:59.565146 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:50:03 crc kubenswrapper[4823]: I0126 15:50:03.043351 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-7ws9r"] Jan 26 15:50:03 crc kubenswrapper[4823]: I0126 15:50:03.051878 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-7ws9r"] Jan 26 15:50:03 crc kubenswrapper[4823]: I0126 15:50:03.599614 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d687761-776a-49bd-ab09-d5672e514edc" path="/var/lib/kubelet/pods/8d687761-776a-49bd-ab09-d5672e514edc/volumes" Jan 26 15:50:07 crc kubenswrapper[4823]: I0126 15:50:07.474348 4823 scope.go:117] "RemoveContainer" containerID="23df354ddc1ee2877e69aed6aefaf412469aa03a2ed91d8acea089341b723cee" Jan 26 15:50:07 crc kubenswrapper[4823]: I0126 15:50:07.524355 4823 scope.go:117] "RemoveContainer" containerID="855fe1999b46e27a25b8e12d505cc27fa6c1caef786cdddc7587444a91846592" Jan 26 15:50:07 crc kubenswrapper[4823]: I0126 15:50:07.572245 4823 scope.go:117] "RemoveContainer" containerID="5fb8157f74a685e0bae4d57bb3787cf9f8186e6f7219e1ea08f7f5b975829bf2" Jan 26 15:50:12 crc kubenswrapper[4823]: I0126 15:50:12.560442 4823 scope.go:117] "RemoveContainer" 
containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:50:12 crc kubenswrapper[4823]: E0126 15:50:12.561209 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:50:26 crc kubenswrapper[4823]: I0126 15:50:26.560692 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:50:26 crc kubenswrapper[4823]: E0126 15:50:26.561561 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:50:40 crc kubenswrapper[4823]: I0126 15:50:40.560286 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:50:40 crc kubenswrapper[4823]: E0126 15:50:40.561038 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:50:51 crc kubenswrapper[4823]: I0126 15:50:51.560862 4823 scope.go:117] 
"RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:50:51 crc kubenswrapper[4823]: E0126 15:50:51.561623 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:51:06 crc kubenswrapper[4823]: I0126 15:51:06.560819 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:51:06 crc kubenswrapper[4823]: E0126 15:51:06.562527 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:51:18 crc kubenswrapper[4823]: I0126 15:51:18.560373 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:51:18 crc kubenswrapper[4823]: E0126 15:51:18.561151 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:51:31 crc kubenswrapper[4823]: I0126 15:51:31.561148 
4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:51:31 crc kubenswrapper[4823]: E0126 15:51:31.562094 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:51:44 crc kubenswrapper[4823]: I0126 15:51:44.560997 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:51:44 crc kubenswrapper[4823]: E0126 15:51:44.561836 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:51:59 crc kubenswrapper[4823]: I0126 15:51:59.583421 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:51:59 crc kubenswrapper[4823]: E0126 15:51:59.585106 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:52:14 crc kubenswrapper[4823]: I0126 
15:52:14.561041 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:52:14 crc kubenswrapper[4823]: E0126 15:52:14.562025 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:52:29 crc kubenswrapper[4823]: I0126 15:52:29.559944 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:52:29 crc kubenswrapper[4823]: E0126 15:52:29.560641 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:52:43 crc kubenswrapper[4823]: I0126 15:52:43.560276 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:52:43 crc kubenswrapper[4823]: E0126 15:52:43.561036 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:52:58 crc 
kubenswrapper[4823]: I0126 15:52:58.560933 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:52:58 crc kubenswrapper[4823]: E0126 15:52:58.561820 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:53:10 crc kubenswrapper[4823]: I0126 15:53:10.560903 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:53:10 crc kubenswrapper[4823]: E0126 15:53:10.561821 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:53:23 crc kubenswrapper[4823]: I0126 15:53:23.569068 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:53:23 crc kubenswrapper[4823]: E0126 15:53:23.570094 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 
26 15:53:35 crc kubenswrapper[4823]: I0126 15:53:35.560156 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:53:35 crc kubenswrapper[4823]: E0126 15:53:35.561010 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:53:49 crc kubenswrapper[4823]: I0126 15:53:49.560687 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:53:49 crc kubenswrapper[4823]: E0126 15:53:49.561416 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:54:00 crc kubenswrapper[4823]: I0126 15:54:00.560609 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:54:00 crc kubenswrapper[4823]: E0126 15:54:00.561488 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.946741 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nhd4m"] Jan 26 15:54:07 crc kubenswrapper[4823]: E0126 15:54:07.949091 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="extract-utilities" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.949326 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="extract-utilities" Jan 26 15:54:07 crc kubenswrapper[4823]: E0126 15:54:07.949424 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="extract-content" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.949482 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="extract-content" Jan 26 15:54:07 crc kubenswrapper[4823]: E0126 15:54:07.949537 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="registry-server" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.949592 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="registry-server" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.949823 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0498a66c-99c2-4130-8750-d204550abc34" containerName="registry-server" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.951242 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:07 crc kubenswrapper[4823]: I0126 15:54:07.973557 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhd4m"] Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.020730 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmclk\" (UniqueName: \"kubernetes.io/projected/f9033f8d-91b4-4006-a320-743be7cf6054-kube-api-access-bmclk\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.020796 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-utilities\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.020956 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-catalog-content\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.123189 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmclk\" (UniqueName: \"kubernetes.io/projected/f9033f8d-91b4-4006-a320-743be7cf6054-kube-api-access-bmclk\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.123498 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-utilities\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.123637 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-catalog-content\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.124082 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-utilities\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.124169 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-catalog-content\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.144803 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmclk\" (UniqueName: \"kubernetes.io/projected/f9033f8d-91b4-4006-a320-743be7cf6054-kube-api-access-bmclk\") pod \"redhat-operators-nhd4m\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.271004 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:08 crc kubenswrapper[4823]: I0126 15:54:08.815338 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nhd4m"] Jan 26 15:54:09 crc kubenswrapper[4823]: I0126 15:54:09.703722 4823 generic.go:334] "Generic (PLEG): container finished" podID="f9033f8d-91b4-4006-a320-743be7cf6054" containerID="2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d" exitCode=0 Jan 26 15:54:09 crc kubenswrapper[4823]: I0126 15:54:09.703830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerDied","Data":"2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d"} Jan 26 15:54:09 crc kubenswrapper[4823]: I0126 15:54:09.704037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerStarted","Data":"c6ed5ba23d86cc45e44cacf878894a5e84eb17ac84481a8589e6408cedb88eb3"} Jan 26 15:54:09 crc kubenswrapper[4823]: I0126 15:54:09.706888 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 15:54:11 crc kubenswrapper[4823]: I0126 15:54:11.724227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerStarted","Data":"1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd"} Jan 26 15:54:12 crc kubenswrapper[4823]: I0126 15:54:12.560467 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:54:12 crc kubenswrapper[4823]: I0126 15:54:12.735922 4823 generic.go:334] "Generic (PLEG): container finished" podID="f9033f8d-91b4-4006-a320-743be7cf6054" 
containerID="1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd" exitCode=0 Jan 26 15:54:12 crc kubenswrapper[4823]: I0126 15:54:12.736168 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerDied","Data":"1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd"} Jan 26 15:54:13 crc kubenswrapper[4823]: I0126 15:54:13.750992 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"7b505329c074aeef28c22d08978adecb28c0d16d61263596e047e56f449cc0e8"} Jan 26 15:54:14 crc kubenswrapper[4823]: I0126 15:54:14.764038 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerStarted","Data":"b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469"} Jan 26 15:54:14 crc kubenswrapper[4823]: I0126 15:54:14.930524 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nhd4m" podStartSLOduration=3.846034233 podStartE2EDuration="7.930477983s" podCreationTimestamp="2026-01-26 15:54:07 +0000 UTC" firstStartedPulling="2026-01-26 15:54:09.706603267 +0000 UTC m=+4046.392066372" lastFinishedPulling="2026-01-26 15:54:13.791047017 +0000 UTC m=+4050.476510122" observedRunningTime="2026-01-26 15:54:14.926604178 +0000 UTC m=+4051.612067313" watchObservedRunningTime="2026-01-26 15:54:14.930477983 +0000 UTC m=+4051.615941088" Jan 26 15:54:18 crc kubenswrapper[4823]: I0126 15:54:18.272130 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:18 crc kubenswrapper[4823]: I0126 15:54:18.272728 4823 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:19 crc kubenswrapper[4823]: I0126 15:54:19.323110 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nhd4m" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="registry-server" probeResult="failure" output=< Jan 26 15:54:19 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 15:54:19 crc kubenswrapper[4823]: > Jan 26 15:54:28 crc kubenswrapper[4823]: I0126 15:54:28.427795 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:28 crc kubenswrapper[4823]: I0126 15:54:28.489294 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:28 crc kubenswrapper[4823]: I0126 15:54:28.670969 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhd4m"] Jan 26 15:54:29 crc kubenswrapper[4823]: I0126 15:54:29.887124 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nhd4m" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="registry-server" containerID="cri-o://b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469" gracePeriod=2 Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.566374 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.755644 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-catalog-content\") pod \"f9033f8d-91b4-4006-a320-743be7cf6054\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.755931 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmclk\" (UniqueName: \"kubernetes.io/projected/f9033f8d-91b4-4006-a320-743be7cf6054-kube-api-access-bmclk\") pod \"f9033f8d-91b4-4006-a320-743be7cf6054\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.756310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-utilities\") pod \"f9033f8d-91b4-4006-a320-743be7cf6054\" (UID: \"f9033f8d-91b4-4006-a320-743be7cf6054\") " Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.756920 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-utilities" (OuterVolumeSpecName: "utilities") pod "f9033f8d-91b4-4006-a320-743be7cf6054" (UID: "f9033f8d-91b4-4006-a320-743be7cf6054"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.757553 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.763154 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9033f8d-91b4-4006-a320-743be7cf6054-kube-api-access-bmclk" (OuterVolumeSpecName: "kube-api-access-bmclk") pod "f9033f8d-91b4-4006-a320-743be7cf6054" (UID: "f9033f8d-91b4-4006-a320-743be7cf6054"). InnerVolumeSpecName "kube-api-access-bmclk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.859741 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmclk\" (UniqueName: \"kubernetes.io/projected/f9033f8d-91b4-4006-a320-743be7cf6054-kube-api-access-bmclk\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.878285 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9033f8d-91b4-4006-a320-743be7cf6054" (UID: "f9033f8d-91b4-4006-a320-743be7cf6054"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.896778 4823 generic.go:334] "Generic (PLEG): container finished" podID="f9033f8d-91b4-4006-a320-743be7cf6054" containerID="b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469" exitCode=0 Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.896851 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerDied","Data":"b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469"} Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.897607 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nhd4m" event={"ID":"f9033f8d-91b4-4006-a320-743be7cf6054","Type":"ContainerDied","Data":"c6ed5ba23d86cc45e44cacf878894a5e84eb17ac84481a8589e6408cedb88eb3"} Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.896869 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nhd4m" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.897647 4823 scope.go:117] "RemoveContainer" containerID="b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.919739 4823 scope.go:117] "RemoveContainer" containerID="1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.934140 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nhd4m"] Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.943181 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nhd4m"] Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.956524 4823 scope.go:117] "RemoveContainer" containerID="2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d" Jan 26 15:54:30 crc kubenswrapper[4823]: I0126 15:54:30.964551 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9033f8d-91b4-4006-a320-743be7cf6054-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.005681 4823 scope.go:117] "RemoveContainer" containerID="b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469" Jan 26 15:54:31 crc kubenswrapper[4823]: E0126 15:54:31.006309 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469\": container with ID starting with b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469 not found: ID does not exist" containerID="b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.006339 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469"} err="failed to get container status \"b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469\": rpc error: code = NotFound desc = could not find container \"b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469\": container with ID starting with b81777d7b7b95ddf992fc8046b8e12ced1e1d0a7a5746ca3a1006e06281f9469 not found: ID does not exist" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.006376 4823 scope.go:117] "RemoveContainer" containerID="1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd" Jan 26 15:54:31 crc kubenswrapper[4823]: E0126 15:54:31.006905 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd\": container with ID starting with 1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd not found: ID does not exist" containerID="1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.006955 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd"} err="failed to get container status \"1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd\": rpc error: code = NotFound desc = could not find container \"1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd\": container with ID starting with 1cce5052dc9aa50650d00645facb5558ec97dfda17fff66afdd0ad2431df23cd not found: ID does not exist" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.006989 4823 scope.go:117] "RemoveContainer" containerID="2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d" Jan 26 15:54:31 crc kubenswrapper[4823]: E0126 15:54:31.007461 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d\": container with ID starting with 2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d not found: ID does not exist" containerID="2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.007548 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d"} err="failed to get container status \"2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d\": rpc error: code = NotFound desc = could not find container \"2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d\": container with ID starting with 2904563a2903a1c841bf792b98ed0c3837bfe277836bfe8130fc86c13d53243d not found: ID does not exist" Jan 26 15:54:31 crc kubenswrapper[4823]: I0126 15:54:31.571733 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" path="/var/lib/kubelet/pods/f9033f8d-91b4-4006-a320-743be7cf6054/volumes" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.800031 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9sgws"] Jan 26 15:54:51 crc kubenswrapper[4823]: E0126 15:54:51.801074 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="registry-server" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.801094 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="registry-server" Jan 26 15:54:51 crc kubenswrapper[4823]: E0126 15:54:51.801112 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="extract-content" Jan 26 15:54:51 crc 
kubenswrapper[4823]: I0126 15:54:51.801120 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="extract-content" Jan 26 15:54:51 crc kubenswrapper[4823]: E0126 15:54:51.801139 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="extract-utilities" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.801147 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="extract-utilities" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.801425 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9033f8d-91b4-4006-a320-743be7cf6054" containerName="registry-server" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.803797 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.815498 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9sgws"] Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.899801 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-utilities\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:51 crc kubenswrapper[4823]: I0126 15:54:51.900206 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd8jq\" (UniqueName: \"kubernetes.io/projected/c332bf84-2745-4603-a3ab-ee64e0641725-kube-api-access-zd8jq\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:51 crc 
kubenswrapper[4823]: I0126 15:54:51.900299 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-catalog-content\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.001780 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-utilities\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.001940 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd8jq\" (UniqueName: \"kubernetes.io/projected/c332bf84-2745-4603-a3ab-ee64e0641725-kube-api-access-zd8jq\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.001994 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-catalog-content\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.002637 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-utilities\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc 
kubenswrapper[4823]: I0126 15:54:52.003894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-catalog-content\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.032385 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd8jq\" (UniqueName: \"kubernetes.io/projected/c332bf84-2745-4603-a3ab-ee64e0641725-kube-api-access-zd8jq\") pod \"community-operators-9sgws\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.124070 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:54:52 crc kubenswrapper[4823]: I0126 15:54:52.670832 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9sgws"] Jan 26 15:54:53 crc kubenswrapper[4823]: E0126 15:54:53.041544 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc332bf84_2745_4603_a3ab_ee64e0641725.slice/crio-33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc332bf84_2745_4603_a3ab_ee64e0641725.slice/crio-conmon-33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df.scope\": RecentStats: unable to find data in memory cache]" Jan 26 15:54:53 crc kubenswrapper[4823]: I0126 15:54:53.085678 4823 generic.go:334] "Generic (PLEG): container finished" podID="c332bf84-2745-4603-a3ab-ee64e0641725" 
containerID="33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df" exitCode=0 Jan 26 15:54:53 crc kubenswrapper[4823]: I0126 15:54:53.085724 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerDied","Data":"33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df"} Jan 26 15:54:53 crc kubenswrapper[4823]: I0126 15:54:53.086286 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerStarted","Data":"e7a67ad6608e77129ad12cc41d0fc950852d8de39c51d93f98aec67cf86afeaf"} Jan 26 15:54:54 crc kubenswrapper[4823]: I0126 15:54:54.095325 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerStarted","Data":"6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492"} Jan 26 15:54:55 crc kubenswrapper[4823]: I0126 15:54:55.104051 4823 generic.go:334] "Generic (PLEG): container finished" podID="c332bf84-2745-4603-a3ab-ee64e0641725" containerID="6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492" exitCode=0 Jan 26 15:54:55 crc kubenswrapper[4823]: I0126 15:54:55.104314 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerDied","Data":"6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492"} Jan 26 15:54:57 crc kubenswrapper[4823]: I0126 15:54:57.122081 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerStarted","Data":"704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5"} Jan 26 15:54:57 crc 
kubenswrapper[4823]: I0126 15:54:57.150845 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9sgws" podStartSLOduration=3.640871645 podStartE2EDuration="6.15082461s" podCreationTimestamp="2026-01-26 15:54:51 +0000 UTC" firstStartedPulling="2026-01-26 15:54:53.088509401 +0000 UTC m=+4089.773972506" lastFinishedPulling="2026-01-26 15:54:55.598462366 +0000 UTC m=+4092.283925471" observedRunningTime="2026-01-26 15:54:57.144559298 +0000 UTC m=+4093.830022403" watchObservedRunningTime="2026-01-26 15:54:57.15082461 +0000 UTC m=+4093.836287715" Jan 26 15:55:02 crc kubenswrapper[4823]: I0126 15:55:02.124641 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:55:02 crc kubenswrapper[4823]: I0126 15:55:02.125211 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:55:02 crc kubenswrapper[4823]: I0126 15:55:02.171171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:55:02 crc kubenswrapper[4823]: I0126 15:55:02.219793 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:55:02 crc kubenswrapper[4823]: I0126 15:55:02.411055 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9sgws"] Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.174046 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9sgws" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="registry-server" containerID="cri-o://704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5" gracePeriod=2 Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.838064 4823 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.964856 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd8jq\" (UniqueName: \"kubernetes.io/projected/c332bf84-2745-4603-a3ab-ee64e0641725-kube-api-access-zd8jq\") pod \"c332bf84-2745-4603-a3ab-ee64e0641725\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.964931 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-utilities\") pod \"c332bf84-2745-4603-a3ab-ee64e0641725\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.965298 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-catalog-content\") pod \"c332bf84-2745-4603-a3ab-ee64e0641725\" (UID: \"c332bf84-2745-4603-a3ab-ee64e0641725\") " Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.965930 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-utilities" (OuterVolumeSpecName: "utilities") pod "c332bf84-2745-4603-a3ab-ee64e0641725" (UID: "c332bf84-2745-4603-a3ab-ee64e0641725"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:55:04 crc kubenswrapper[4823]: I0126 15:55:04.979487 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c332bf84-2745-4603-a3ab-ee64e0641725-kube-api-access-zd8jq" (OuterVolumeSpecName: "kube-api-access-zd8jq") pod "c332bf84-2745-4603-a3ab-ee64e0641725" (UID: "c332bf84-2745-4603-a3ab-ee64e0641725"). 
InnerVolumeSpecName "kube-api-access-zd8jq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.018899 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c332bf84-2745-4603-a3ab-ee64e0641725" (UID: "c332bf84-2745-4603-a3ab-ee64e0641725"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.069851 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.069936 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd8jq\" (UniqueName: \"kubernetes.io/projected/c332bf84-2745-4603-a3ab-ee64e0641725-kube-api-access-zd8jq\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.069952 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c332bf84-2745-4603-a3ab-ee64e0641725-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.189862 4823 generic.go:334] "Generic (PLEG): container finished" podID="c332bf84-2745-4603-a3ab-ee64e0641725" containerID="704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5" exitCode=0 Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.189911 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerDied","Data":"704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5"} Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.189942 
4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9sgws" event={"ID":"c332bf84-2745-4603-a3ab-ee64e0641725","Type":"ContainerDied","Data":"e7a67ad6608e77129ad12cc41d0fc950852d8de39c51d93f98aec67cf86afeaf"} Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.189937 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9sgws" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.189959 4823 scope.go:117] "RemoveContainer" containerID="704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.211060 4823 scope.go:117] "RemoveContainer" containerID="6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.237613 4823 scope.go:117] "RemoveContainer" containerID="33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.245207 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9sgws"] Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.268096 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9sgws"] Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.273119 4823 scope.go:117] "RemoveContainer" containerID="704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5" Jan 26 15:55:05 crc kubenswrapper[4823]: E0126 15:55:05.273617 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5\": container with ID starting with 704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5 not found: ID does not exist" containerID="704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5" Jan 26 15:55:05 
crc kubenswrapper[4823]: I0126 15:55:05.273669 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5"} err="failed to get container status \"704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5\": rpc error: code = NotFound desc = could not find container \"704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5\": container with ID starting with 704da1ba0ecd373a6860918b3fac4eea41fef7ffc0e3f70d81075fa3d3b0c1e5 not found: ID does not exist" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.273695 4823 scope.go:117] "RemoveContainer" containerID="6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492" Jan 26 15:55:05 crc kubenswrapper[4823]: E0126 15:55:05.274141 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492\": container with ID starting with 6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492 not found: ID does not exist" containerID="6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.274176 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492"} err="failed to get container status \"6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492\": rpc error: code = NotFound desc = could not find container \"6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492\": container with ID starting with 6216de8db2677ccf9726b098954cee08dab70497f3761a0e992c23098e92a492 not found: ID does not exist" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.274201 4823 scope.go:117] "RemoveContainer" containerID="33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df" Jan 26 
15:55:05 crc kubenswrapper[4823]: E0126 15:55:05.274432 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df\": container with ID starting with 33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df not found: ID does not exist" containerID="33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.274481 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df"} err="failed to get container status \"33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df\": rpc error: code = NotFound desc = could not find container \"33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df\": container with ID starting with 33f19a47cb4ebb0b34a2066795313fe6cbc1bd0ad3bd35a0f4923b8952e477df not found: ID does not exist" Jan 26 15:55:05 crc kubenswrapper[4823]: I0126 15:55:05.572448 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" path="/var/lib/kubelet/pods/c332bf84-2745-4603-a3ab-ee64e0641725/volumes" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.530125 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m45xs"] Jan 26 15:55:28 crc kubenswrapper[4823]: E0126 15:55:28.531191 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="registry-server" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.531208 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="registry-server" Jan 26 15:55:28 crc kubenswrapper[4823]: E0126 15:55:28.531244 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="extract-content" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.531252 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="extract-content" Jan 26 15:55:28 crc kubenswrapper[4823]: E0126 15:55:28.531262 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="extract-utilities" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.531270 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="extract-utilities" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.532960 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c332bf84-2745-4603-a3ab-ee64e0641725" containerName="registry-server" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.558059 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.562128 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m45xs"] Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.594394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-catalog-content\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.594467 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndzcv\" (UniqueName: \"kubernetes.io/projected/668fcee3-13f8-4e93-aaeb-4b0473a2471c-kube-api-access-ndzcv\") pod \"redhat-marketplace-m45xs\" (UID: 
\"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.594549 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-utilities\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.696331 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-catalog-content\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.696715 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndzcv\" (UniqueName: \"kubernetes.io/projected/668fcee3-13f8-4e93-aaeb-4b0473a2471c-kube-api-access-ndzcv\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.696879 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-utilities\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.698017 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-utilities\") pod \"redhat-marketplace-m45xs\" (UID: 
\"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.698248 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-catalog-content\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.721417 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndzcv\" (UniqueName: \"kubernetes.io/projected/668fcee3-13f8-4e93-aaeb-4b0473a2471c-kube-api-access-ndzcv\") pod \"redhat-marketplace-m45xs\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:28 crc kubenswrapper[4823]: I0126 15:55:28.883909 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:29 crc kubenswrapper[4823]: I0126 15:55:29.383523 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m45xs"] Jan 26 15:55:29 crc kubenswrapper[4823]: I0126 15:55:29.420126 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m45xs" event={"ID":"668fcee3-13f8-4e93-aaeb-4b0473a2471c","Type":"ContainerStarted","Data":"b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b"} Jan 26 15:55:30 crc kubenswrapper[4823]: I0126 15:55:30.428729 4823 generic.go:334] "Generic (PLEG): container finished" podID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerID="d8c822626bbdcbf0cc9daddee044392e8d7bdf116484ae99378dc59826a97fe6" exitCode=0 Jan 26 15:55:30 crc kubenswrapper[4823]: I0126 15:55:30.429153 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m45xs" 
event={"ID":"668fcee3-13f8-4e93-aaeb-4b0473a2471c","Type":"ContainerDied","Data":"d8c822626bbdcbf0cc9daddee044392e8d7bdf116484ae99378dc59826a97fe6"} Jan 26 15:55:32 crc kubenswrapper[4823]: I0126 15:55:32.447269 4823 generic.go:334] "Generic (PLEG): container finished" podID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerID="5750c8791d637eb578f5c059bfbc9032d03722a1abe6b997278454f9903813ea" exitCode=0 Jan 26 15:55:32 crc kubenswrapper[4823]: I0126 15:55:32.447402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m45xs" event={"ID":"668fcee3-13f8-4e93-aaeb-4b0473a2471c","Type":"ContainerDied","Data":"5750c8791d637eb578f5c059bfbc9032d03722a1abe6b997278454f9903813ea"} Jan 26 15:55:33 crc kubenswrapper[4823]: I0126 15:55:33.470064 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m45xs" event={"ID":"668fcee3-13f8-4e93-aaeb-4b0473a2471c","Type":"ContainerStarted","Data":"4ce823b79071dc32a6cf1a63e8cb15088a2a0d84c0f1697a3eca5e41aad80721"} Jan 26 15:55:33 crc kubenswrapper[4823]: I0126 15:55:33.504754 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m45xs" podStartSLOduration=3.110303573 podStartE2EDuration="5.504735119s" podCreationTimestamp="2026-01-26 15:55:28 +0000 UTC" firstStartedPulling="2026-01-26 15:55:30.430967654 +0000 UTC m=+4127.116430759" lastFinishedPulling="2026-01-26 15:55:32.8253992 +0000 UTC m=+4129.510862305" observedRunningTime="2026-01-26 15:55:33.504025969 +0000 UTC m=+4130.189489074" watchObservedRunningTime="2026-01-26 15:55:33.504735119 +0000 UTC m=+4130.190198224" Jan 26 15:55:38 crc kubenswrapper[4823]: I0126 15:55:38.884606 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:38 crc kubenswrapper[4823]: I0126 15:55:38.885135 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:38 crc kubenswrapper[4823]: I0126 15:55:38.933646 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:39 crc kubenswrapper[4823]: I0126 15:55:39.572864 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:39 crc kubenswrapper[4823]: I0126 15:55:39.658925 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m45xs"] Jan 26 15:55:41 crc kubenswrapper[4823]: I0126 15:55:41.532305 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m45xs" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="registry-server" containerID="cri-o://4ce823b79071dc32a6cf1a63e8cb15088a2a0d84c0f1697a3eca5e41aad80721" gracePeriod=2 Jan 26 15:55:42 crc kubenswrapper[4823]: I0126 15:55:42.544922 4823 generic.go:334] "Generic (PLEG): container finished" podID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerID="4ce823b79071dc32a6cf1a63e8cb15088a2a0d84c0f1697a3eca5e41aad80721" exitCode=0 Jan 26 15:55:42 crc kubenswrapper[4823]: I0126 15:55:42.544964 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m45xs" event={"ID":"668fcee3-13f8-4e93-aaeb-4b0473a2471c","Type":"ContainerDied","Data":"4ce823b79071dc32a6cf1a63e8cb15088a2a0d84c0f1697a3eca5e41aad80721"} Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.038800 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.136022 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-utilities\") pod \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.136186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-catalog-content\") pod \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.136243 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndzcv\" (UniqueName: \"kubernetes.io/projected/668fcee3-13f8-4e93-aaeb-4b0473a2471c-kube-api-access-ndzcv\") pod \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\" (UID: \"668fcee3-13f8-4e93-aaeb-4b0473a2471c\") " Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.137096 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-utilities" (OuterVolumeSpecName: "utilities") pod "668fcee3-13f8-4e93-aaeb-4b0473a2471c" (UID: "668fcee3-13f8-4e93-aaeb-4b0473a2471c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.137473 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.142580 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668fcee3-13f8-4e93-aaeb-4b0473a2471c-kube-api-access-ndzcv" (OuterVolumeSpecName: "kube-api-access-ndzcv") pod "668fcee3-13f8-4e93-aaeb-4b0473a2471c" (UID: "668fcee3-13f8-4e93-aaeb-4b0473a2471c"). InnerVolumeSpecName "kube-api-access-ndzcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.169072 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "668fcee3-13f8-4e93-aaeb-4b0473a2471c" (UID: "668fcee3-13f8-4e93-aaeb-4b0473a2471c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.239101 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/668fcee3-13f8-4e93-aaeb-4b0473a2471c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.239151 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndzcv\" (UniqueName: \"kubernetes.io/projected/668fcee3-13f8-4e93-aaeb-4b0473a2471c-kube-api-access-ndzcv\") on node \"crc\" DevicePath \"\"" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.556340 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m45xs" event={"ID":"668fcee3-13f8-4e93-aaeb-4b0473a2471c","Type":"ContainerDied","Data":"b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b"} Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.556416 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m45xs" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.556633 4823 scope.go:117] "RemoveContainer" containerID="4ce823b79071dc32a6cf1a63e8cb15088a2a0d84c0f1697a3eca5e41aad80721" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.581289 4823 scope.go:117] "RemoveContainer" containerID="5750c8791d637eb578f5c059bfbc9032d03722a1abe6b997278454f9903813ea" Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.613797 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m45xs"] Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.633734 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m45xs"] Jan 26 15:55:43 crc kubenswrapper[4823]: I0126 15:55:43.635563 4823 scope.go:117] "RemoveContainer" containerID="d8c822626bbdcbf0cc9daddee044392e8d7bdf116484ae99378dc59826a97fe6" Jan 26 15:55:44 crc kubenswrapper[4823]: E0126 15:55:44.262393 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice/crio-b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b\": RecentStats: unable to find data in memory cache]" Jan 26 15:55:45 crc kubenswrapper[4823]: I0126 15:55:45.571100 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" path="/var/lib/kubelet/pods/668fcee3-13f8-4e93-aaeb-4b0473a2471c/volumes" Jan 26 15:55:54 crc kubenswrapper[4823]: E0126 15:55:54.509675 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice/crio-b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice\": RecentStats: unable to find data in memory cache]" Jan 26 15:56:04 crc kubenswrapper[4823]: E0126 15:56:04.741131 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice/crio-b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice\": RecentStats: unable to find data in memory cache]" Jan 26 15:56:14 crc kubenswrapper[4823]: E0126 15:56:14.992390 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice/crio-b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b\": RecentStats: unable to find data in memory cache]" Jan 26 15:56:25 crc kubenswrapper[4823]: E0126 15:56:25.231203 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice/crio-b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b\": RecentStats: unable to find data in memory cache]" Jan 26 15:56:34 crc kubenswrapper[4823]: I0126 15:56:34.508655 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:56:34 crc kubenswrapper[4823]: I0126 15:56:34.509213 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:56:35 crc kubenswrapper[4823]: E0126 15:56:35.476074 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod668fcee3_13f8_4e93_aaeb_4b0473a2471c.slice/crio-b508b2502e626caa1b593162cf130d9e3e79c7c803afdcecd280bb93b500b85b\": RecentStats: unable to find data in memory cache]" Jan 26 15:57:04 crc kubenswrapper[4823]: I0126 15:57:04.508544 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:57:04 crc kubenswrapper[4823]: I0126 15:57:04.509108 4823 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:57:34 crc kubenswrapper[4823]: I0126 15:57:34.508781 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:57:34 crc kubenswrapper[4823]: I0126 15:57:34.509385 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 15:57:34 crc kubenswrapper[4823]: I0126 15:57:34.509444 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 15:57:34 crc kubenswrapper[4823]: I0126 15:57:34.510263 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b505329c074aeef28c22d08978adecb28c0d16d61263596e047e56f449cc0e8"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 15:57:34 crc kubenswrapper[4823]: I0126 15:57:34.510322 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" 
containerID="cri-o://7b505329c074aeef28c22d08978adecb28c0d16d61263596e047e56f449cc0e8" gracePeriod=600 Jan 26 15:57:35 crc kubenswrapper[4823]: I0126 15:57:35.633588 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="7b505329c074aeef28c22d08978adecb28c0d16d61263596e047e56f449cc0e8" exitCode=0 Jan 26 15:57:35 crc kubenswrapper[4823]: I0126 15:57:35.633666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"7b505329c074aeef28c22d08978adecb28c0d16d61263596e047e56f449cc0e8"} Jan 26 15:57:35 crc kubenswrapper[4823]: I0126 15:57:35.634590 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72"} Jan 26 15:57:35 crc kubenswrapper[4823]: I0126 15:57:35.634631 4823 scope.go:117] "RemoveContainer" containerID="7fd2d47842477cafbaff20823dac2a38ed6a33d2d8a8f9c5f4511adecb849b11" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.212027 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lrn9w"] Jan 26 15:57:37 crc kubenswrapper[4823]: E0126 15:57:37.212775 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="extract-utilities" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.212792 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="extract-utilities" Jan 26 15:57:37 crc kubenswrapper[4823]: E0126 15:57:37.212827 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="registry-server" Jan 26 
15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.212837 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="registry-server" Jan 26 15:57:37 crc kubenswrapper[4823]: E0126 15:57:37.212858 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="extract-content" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.212868 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="extract-content" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.213095 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="668fcee3-13f8-4e93-aaeb-4b0473a2471c" containerName="registry-server" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.214407 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.228629 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrn9w"] Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.333852 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6hw\" (UniqueName: \"kubernetes.io/projected/853489ed-34a0-4a19-93ce-592ec0f111a5-kube-api-access-bj6hw\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.333917 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-utilities\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 
crc kubenswrapper[4823]: I0126 15:57:37.334010 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-catalog-content\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.435855 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj6hw\" (UniqueName: \"kubernetes.io/projected/853489ed-34a0-4a19-93ce-592ec0f111a5-kube-api-access-bj6hw\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.435926 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-utilities\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.436030 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-catalog-content\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.436578 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-catalog-content\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc 
kubenswrapper[4823]: I0126 15:57:37.436593 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-utilities\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.457629 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj6hw\" (UniqueName: \"kubernetes.io/projected/853489ed-34a0-4a19-93ce-592ec0f111a5-kube-api-access-bj6hw\") pod \"certified-operators-lrn9w\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:37 crc kubenswrapper[4823]: I0126 15:57:37.557980 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:38 crc kubenswrapper[4823]: I0126 15:57:38.174740 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lrn9w"] Jan 26 15:57:38 crc kubenswrapper[4823]: I0126 15:57:38.677847 4823 generic.go:334] "Generic (PLEG): container finished" podID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerID="176bfc77a9a4be641544b94a29345a28658452b51da0d18cde81d7ad231df33f" exitCode=0 Jan 26 15:57:38 crc kubenswrapper[4823]: I0126 15:57:38.677978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerDied","Data":"176bfc77a9a4be641544b94a29345a28658452b51da0d18cde81d7ad231df33f"} Jan 26 15:57:38 crc kubenswrapper[4823]: I0126 15:57:38.678161 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" 
event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerStarted","Data":"87ddd28fb2b9f70c2e5d20f1f271b7910c1de2ae2e85c94184bf3f68de4103fa"} Jan 26 15:57:39 crc kubenswrapper[4823]: I0126 15:57:39.687947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerStarted","Data":"bf76cfa7f45ee9bd431131ed4b6087a19058a89e0256ea2378b5a1bb6cba888d"} Jan 26 15:57:40 crc kubenswrapper[4823]: I0126 15:57:40.701747 4823 generic.go:334] "Generic (PLEG): container finished" podID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerID="bf76cfa7f45ee9bd431131ed4b6087a19058a89e0256ea2378b5a1bb6cba888d" exitCode=0 Jan 26 15:57:40 crc kubenswrapper[4823]: I0126 15:57:40.701811 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerDied","Data":"bf76cfa7f45ee9bd431131ed4b6087a19058a89e0256ea2378b5a1bb6cba888d"} Jan 26 15:57:41 crc kubenswrapper[4823]: I0126 15:57:41.712550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerStarted","Data":"5d69e043f5489460b5c1d1452769e6f95ff5f2d1e6323e4bef8ed545af7403b1"} Jan 26 15:57:41 crc kubenswrapper[4823]: I0126 15:57:41.736025 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lrn9w" podStartSLOduration=2.212699596 podStartE2EDuration="4.736006485s" podCreationTimestamp="2026-01-26 15:57:37 +0000 UTC" firstStartedPulling="2026-01-26 15:57:38.679615394 +0000 UTC m=+4255.365078499" lastFinishedPulling="2026-01-26 15:57:41.202922273 +0000 UTC m=+4257.888385388" observedRunningTime="2026-01-26 15:57:41.73359962 +0000 UTC m=+4258.419062735" watchObservedRunningTime="2026-01-26 15:57:41.736006485 +0000 UTC 
m=+4258.421469590" Jan 26 15:57:47 crc kubenswrapper[4823]: I0126 15:57:47.558868 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:47 crc kubenswrapper[4823]: I0126 15:57:47.560526 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:47 crc kubenswrapper[4823]: I0126 15:57:47.606615 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:47 crc kubenswrapper[4823]: I0126 15:57:47.809959 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:47 crc kubenswrapper[4823]: I0126 15:57:47.859405 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrn9w"] Jan 26 15:57:49 crc kubenswrapper[4823]: I0126 15:57:49.777816 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lrn9w" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="registry-server" containerID="cri-o://5d69e043f5489460b5c1d1452769e6f95ff5f2d1e6323e4bef8ed545af7403b1" gracePeriod=2 Jan 26 15:57:50 crc kubenswrapper[4823]: I0126 15:57:50.790313 4823 generic.go:334] "Generic (PLEG): container finished" podID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerID="5d69e043f5489460b5c1d1452769e6f95ff5f2d1e6323e4bef8ed545af7403b1" exitCode=0 Jan 26 15:57:50 crc kubenswrapper[4823]: I0126 15:57:50.790543 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerDied","Data":"5d69e043f5489460b5c1d1452769e6f95ff5f2d1e6323e4bef8ed545af7403b1"} Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.184052 4823 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.293833 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-catalog-content\") pod \"853489ed-34a0-4a19-93ce-592ec0f111a5\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.293941 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj6hw\" (UniqueName: \"kubernetes.io/projected/853489ed-34a0-4a19-93ce-592ec0f111a5-kube-api-access-bj6hw\") pod \"853489ed-34a0-4a19-93ce-592ec0f111a5\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.294031 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-utilities\") pod \"853489ed-34a0-4a19-93ce-592ec0f111a5\" (UID: \"853489ed-34a0-4a19-93ce-592ec0f111a5\") " Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.295521 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-utilities" (OuterVolumeSpecName: "utilities") pod "853489ed-34a0-4a19-93ce-592ec0f111a5" (UID: "853489ed-34a0-4a19-93ce-592ec0f111a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.314234 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853489ed-34a0-4a19-93ce-592ec0f111a5-kube-api-access-bj6hw" (OuterVolumeSpecName: "kube-api-access-bj6hw") pod "853489ed-34a0-4a19-93ce-592ec0f111a5" (UID: "853489ed-34a0-4a19-93ce-592ec0f111a5"). 
InnerVolumeSpecName "kube-api-access-bj6hw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.342710 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "853489ed-34a0-4a19-93ce-592ec0f111a5" (UID: "853489ed-34a0-4a19-93ce-592ec0f111a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.396739 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.396792 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj6hw\" (UniqueName: \"kubernetes.io/projected/853489ed-34a0-4a19-93ce-592ec0f111a5-kube-api-access-bj6hw\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.396804 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853489ed-34a0-4a19-93ce-592ec0f111a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.801399 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lrn9w" event={"ID":"853489ed-34a0-4a19-93ce-592ec0f111a5","Type":"ContainerDied","Data":"87ddd28fb2b9f70c2e5d20f1f271b7910c1de2ae2e85c94184bf3f68de4103fa"} Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.801747 4823 scope.go:117] "RemoveContainer" containerID="5d69e043f5489460b5c1d1452769e6f95ff5f2d1e6323e4bef8ed545af7403b1" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.801905 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lrn9w" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.828142 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lrn9w"] Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.841618 4823 scope.go:117] "RemoveContainer" containerID="bf76cfa7f45ee9bd431131ed4b6087a19058a89e0256ea2378b5a1bb6cba888d" Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.861639 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lrn9w"] Jan 26 15:57:51 crc kubenswrapper[4823]: I0126 15:57:51.879583 4823 scope.go:117] "RemoveContainer" containerID="176bfc77a9a4be641544b94a29345a28658452b51da0d18cde81d7ad231df33f" Jan 26 15:57:53 crc kubenswrapper[4823]: I0126 15:57:53.572513 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" path="/var/lib/kubelet/pods/853489ed-34a0-4a19-93ce-592ec0f111a5/volumes" Jan 26 15:59:34 crc kubenswrapper[4823]: I0126 15:59:34.508578 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 15:59:34 crc kubenswrapper[4823]: I0126 15:59:34.509579 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.180520 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs"] Jan 26 16:00:00 crc 
kubenswrapper[4823]: E0126 16:00:00.181508 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="extract-content" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.181525 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="extract-content" Jan 26 16:00:00 crc kubenswrapper[4823]: E0126 16:00:00.181548 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="extract-utilities" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.181554 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="extract-utilities" Jan 26 16:00:00 crc kubenswrapper[4823]: E0126 16:00:00.181570 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.181576 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.181782 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="853489ed-34a0-4a19-93ce-592ec0f111a5" containerName="registry-server" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.182503 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.184959 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.185473 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.195225 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs"] Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.307930 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49jwz\" (UniqueName: \"kubernetes.io/projected/40c127e7-9656-4045-99f6-4c6403877cbb-kube-api-access-49jwz\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.308150 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c127e7-9656-4045-99f6-4c6403877cbb-config-volume\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.308198 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40c127e7-9656-4045-99f6-4c6403877cbb-secret-volume\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.410471 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c127e7-9656-4045-99f6-4c6403877cbb-config-volume\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.410801 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40c127e7-9656-4045-99f6-4c6403877cbb-secret-volume\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.411011 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49jwz\" (UniqueName: \"kubernetes.io/projected/40c127e7-9656-4045-99f6-4c6403877cbb-kube-api-access-49jwz\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.411908 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c127e7-9656-4045-99f6-4c6403877cbb-config-volume\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.418680 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/40c127e7-9656-4045-99f6-4c6403877cbb-secret-volume\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.430216 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49jwz\" (UniqueName: \"kubernetes.io/projected/40c127e7-9656-4045-99f6-4c6403877cbb-kube-api-access-49jwz\") pod \"collect-profiles-29490720-f59hs\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.502526 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:00 crc kubenswrapper[4823]: I0126 16:00:00.963183 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs"] Jan 26 16:00:01 crc kubenswrapper[4823]: I0126 16:00:01.915573 4823 generic.go:334] "Generic (PLEG): container finished" podID="40c127e7-9656-4045-99f6-4c6403877cbb" containerID="46f690ecc5480426be61e431c17d89bd1043d825249aa66167b9bca91c63b708" exitCode=0 Jan 26 16:00:01 crc kubenswrapper[4823]: I0126 16:00:01.915634 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" event={"ID":"40c127e7-9656-4045-99f6-4c6403877cbb","Type":"ContainerDied","Data":"46f690ecc5480426be61e431c17d89bd1043d825249aa66167b9bca91c63b708"} Jan 26 16:00:01 crc kubenswrapper[4823]: I0126 16:00:01.915959 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" 
event={"ID":"40c127e7-9656-4045-99f6-4c6403877cbb","Type":"ContainerStarted","Data":"f1b8c4a7010870ce3afc689251f6a5afa338b4753aa9c9fa6bc2ae94b95778e1"} Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.530464 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.680031 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c127e7-9656-4045-99f6-4c6403877cbb-config-volume\") pod \"40c127e7-9656-4045-99f6-4c6403877cbb\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.680090 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49jwz\" (UniqueName: \"kubernetes.io/projected/40c127e7-9656-4045-99f6-4c6403877cbb-kube-api-access-49jwz\") pod \"40c127e7-9656-4045-99f6-4c6403877cbb\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.680186 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40c127e7-9656-4045-99f6-4c6403877cbb-secret-volume\") pod \"40c127e7-9656-4045-99f6-4c6403877cbb\" (UID: \"40c127e7-9656-4045-99f6-4c6403877cbb\") " Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.682034 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40c127e7-9656-4045-99f6-4c6403877cbb-config-volume" (OuterVolumeSpecName: "config-volume") pod "40c127e7-9656-4045-99f6-4c6403877cbb" (UID: "40c127e7-9656-4045-99f6-4c6403877cbb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.688185 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c127e7-9656-4045-99f6-4c6403877cbb-kube-api-access-49jwz" (OuterVolumeSpecName: "kube-api-access-49jwz") pod "40c127e7-9656-4045-99f6-4c6403877cbb" (UID: "40c127e7-9656-4045-99f6-4c6403877cbb"). InnerVolumeSpecName "kube-api-access-49jwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.688588 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40c127e7-9656-4045-99f6-4c6403877cbb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40c127e7-9656-4045-99f6-4c6403877cbb" (UID: "40c127e7-9656-4045-99f6-4c6403877cbb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.783645 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40c127e7-9656-4045-99f6-4c6403877cbb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.783686 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49jwz\" (UniqueName: \"kubernetes.io/projected/40c127e7-9656-4045-99f6-4c6403877cbb-kube-api-access-49jwz\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.783696 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40c127e7-9656-4045-99f6-4c6403877cbb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.933410 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" 
event={"ID":"40c127e7-9656-4045-99f6-4c6403877cbb","Type":"ContainerDied","Data":"f1b8c4a7010870ce3afc689251f6a5afa338b4753aa9c9fa6bc2ae94b95778e1"} Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.933498 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1b8c4a7010870ce3afc689251f6a5afa338b4753aa9c9fa6bc2ae94b95778e1" Jan 26 16:00:03 crc kubenswrapper[4823]: I0126 16:00:03.933449 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs" Jan 26 16:00:04 crc kubenswrapper[4823]: I0126 16:00:04.508057 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:00:04 crc kubenswrapper[4823]: I0126 16:00:04.508388 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:00:04 crc kubenswrapper[4823]: I0126 16:00:04.617191 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb"] Jan 26 16:00:04 crc kubenswrapper[4823]: I0126 16:00:04.625415 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490675-75pbb"] Jan 26 16:00:05 crc kubenswrapper[4823]: I0126 16:00:05.572921 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837cede5-7802-40a7-a31f-09df765035ac" path="/var/lib/kubelet/pods/837cede5-7802-40a7-a31f-09df765035ac/volumes" Jan 26 16:00:07 crc 
kubenswrapper[4823]: I0126 16:00:07.901853 4823 scope.go:117] "RemoveContainer" containerID="49a1a46767a1100f302a39c595025cc13cea955f88daa4174b783c4039320cdb" Jan 26 16:00:34 crc kubenswrapper[4823]: I0126 16:00:34.508476 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:00:34 crc kubenswrapper[4823]: I0126 16:00:34.509472 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:00:34 crc kubenswrapper[4823]: I0126 16:00:34.509556 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:00:34 crc kubenswrapper[4823]: I0126 16:00:34.511041 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:00:34 crc kubenswrapper[4823]: I0126 16:00:34.511134 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" gracePeriod=600 Jan 26 16:00:34 crc kubenswrapper[4823]: E0126 
16:00:34.637186 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:00:35 crc kubenswrapper[4823]: I0126 16:00:35.196872 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" exitCode=0 Jan 26 16:00:35 crc kubenswrapper[4823]: I0126 16:00:35.196915 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72"} Jan 26 16:00:35 crc kubenswrapper[4823]: I0126 16:00:35.196950 4823 scope.go:117] "RemoveContainer" containerID="7b505329c074aeef28c22d08978adecb28c0d16d61263596e047e56f449cc0e8" Jan 26 16:00:35 crc kubenswrapper[4823]: I0126 16:00:35.197660 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:00:35 crc kubenswrapper[4823]: E0126 16:00:35.198089 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:00:49 crc kubenswrapper[4823]: I0126 16:00:49.561197 4823 scope.go:117] "RemoveContainer" 
containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:00:49 crc kubenswrapper[4823]: E0126 16:00:49.562442 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.155971 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490721-lbzct"] Jan 26 16:01:00 crc kubenswrapper[4823]: E0126 16:01:00.158164 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c127e7-9656-4045-99f6-4c6403877cbb" containerName="collect-profiles" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.158283 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c127e7-9656-4045-99f6-4c6403877cbb" containerName="collect-profiles" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.158650 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c127e7-9656-4045-99f6-4c6403877cbb" containerName="collect-profiles" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.162783 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.167702 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-lbzct"] Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.212510 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-config-data\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.212572 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sccwr\" (UniqueName: \"kubernetes.io/projected/96419d80-48f5-4579-884b-ae8f81f43ff6-kube-api-access-sccwr\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.212646 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-fernet-keys\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.212743 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-combined-ca-bundle\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.314232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-config-data\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.314615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sccwr\" (UniqueName: \"kubernetes.io/projected/96419d80-48f5-4579-884b-ae8f81f43ff6-kube-api-access-sccwr\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.314688 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-fernet-keys\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.314746 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-combined-ca-bundle\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.321430 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-config-data\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.321771 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-combined-ca-bundle\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.333544 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-fernet-keys\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.335045 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sccwr\" (UniqueName: \"kubernetes.io/projected/96419d80-48f5-4579-884b-ae8f81f43ff6-kube-api-access-sccwr\") pod \"keystone-cron-29490721-lbzct\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.492444 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.561789 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:01:00 crc kubenswrapper[4823]: E0126 16:01:00.562217 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:01:00 crc kubenswrapper[4823]: I0126 16:01:00.950899 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490721-lbzct"] Jan 26 16:01:01 crc kubenswrapper[4823]: I0126 16:01:01.414257 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-lbzct" event={"ID":"96419d80-48f5-4579-884b-ae8f81f43ff6","Type":"ContainerStarted","Data":"892d5066b80147f09ac7a0d09528490cf8cfe4ced7c3983fc51136fb9017c0a2"} Jan 26 16:01:01 crc kubenswrapper[4823]: I0126 16:01:01.414581 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-lbzct" event={"ID":"96419d80-48f5-4579-884b-ae8f81f43ff6","Type":"ContainerStarted","Data":"318b1d4cd9dbacf76423b917962445fbad2bf26aa0beb03007afc10b3a8e2095"} Jan 26 16:01:01 crc kubenswrapper[4823]: I0126 16:01:01.454901 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490721-lbzct" podStartSLOduration=1.4548826080000001 podStartE2EDuration="1.454882608s" podCreationTimestamp="2026-01-26 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:01:01.450300163 
+0000 UTC m=+4458.135763298" watchObservedRunningTime="2026-01-26 16:01:01.454882608 +0000 UTC m=+4458.140345713" Jan 26 16:01:04 crc kubenswrapper[4823]: I0126 16:01:04.455124 4823 generic.go:334] "Generic (PLEG): container finished" podID="96419d80-48f5-4579-884b-ae8f81f43ff6" containerID="892d5066b80147f09ac7a0d09528490cf8cfe4ced7c3983fc51136fb9017c0a2" exitCode=0 Jan 26 16:01:04 crc kubenswrapper[4823]: I0126 16:01:04.455189 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-lbzct" event={"ID":"96419d80-48f5-4579-884b-ae8f81f43ff6","Type":"ContainerDied","Data":"892d5066b80147f09ac7a0d09528490cf8cfe4ced7c3983fc51136fb9017c0a2"} Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.055952 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.172999 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-config-data\") pod \"96419d80-48f5-4579-884b-ae8f81f43ff6\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.173093 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sccwr\" (UniqueName: \"kubernetes.io/projected/96419d80-48f5-4579-884b-ae8f81f43ff6-kube-api-access-sccwr\") pod \"96419d80-48f5-4579-884b-ae8f81f43ff6\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.173271 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-combined-ca-bundle\") pod \"96419d80-48f5-4579-884b-ae8f81f43ff6\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 
16:01:06.173292 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-fernet-keys\") pod \"96419d80-48f5-4579-884b-ae8f81f43ff6\" (UID: \"96419d80-48f5-4579-884b-ae8f81f43ff6\") " Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.179718 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96419d80-48f5-4579-884b-ae8f81f43ff6-kube-api-access-sccwr" (OuterVolumeSpecName: "kube-api-access-sccwr") pod "96419d80-48f5-4579-884b-ae8f81f43ff6" (UID: "96419d80-48f5-4579-884b-ae8f81f43ff6"). InnerVolumeSpecName "kube-api-access-sccwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.182601 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "96419d80-48f5-4579-884b-ae8f81f43ff6" (UID: "96419d80-48f5-4579-884b-ae8f81f43ff6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.212121 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96419d80-48f5-4579-884b-ae8f81f43ff6" (UID: "96419d80-48f5-4579-884b-ae8f81f43ff6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.237002 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-config-data" (OuterVolumeSpecName: "config-data") pod "96419d80-48f5-4579-884b-ae8f81f43ff6" (UID: "96419d80-48f5-4579-884b-ae8f81f43ff6"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.291535 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.291584 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sccwr\" (UniqueName: \"kubernetes.io/projected/96419d80-48f5-4579-884b-ae8f81f43ff6-kube-api-access-sccwr\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.291604 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.291622 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/96419d80-48f5-4579-884b-ae8f81f43ff6-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.471971 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490721-lbzct" event={"ID":"96419d80-48f5-4579-884b-ae8f81f43ff6","Type":"ContainerDied","Data":"318b1d4cd9dbacf76423b917962445fbad2bf26aa0beb03007afc10b3a8e2095"} Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.472012 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="318b1d4cd9dbacf76423b917962445fbad2bf26aa0beb03007afc10b3a8e2095" Jan 26 16:01:06 crc kubenswrapper[4823]: I0126 16:01:06.472049 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490721-lbzct" Jan 26 16:01:13 crc kubenswrapper[4823]: I0126 16:01:13.571799 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:01:13 crc kubenswrapper[4823]: E0126 16:01:13.572871 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:01:25 crc kubenswrapper[4823]: I0126 16:01:25.560780 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:01:25 crc kubenswrapper[4823]: E0126 16:01:25.561466 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:01:36 crc kubenswrapper[4823]: I0126 16:01:36.560803 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:01:36 crc kubenswrapper[4823]: E0126 16:01:36.561653 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:01:47 crc kubenswrapper[4823]: I0126 16:01:47.561346 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:01:47 crc kubenswrapper[4823]: E0126 16:01:47.562192 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:01:59 crc kubenswrapper[4823]: I0126 16:01:59.564000 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:01:59 crc kubenswrapper[4823]: E0126 16:01:59.564689 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:02:10 crc kubenswrapper[4823]: I0126 16:02:10.560341 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:02:10 crc kubenswrapper[4823]: E0126 16:02:10.561130 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:02:22 crc kubenswrapper[4823]: I0126 16:02:22.798835 4823 patch_prober.go:28] interesting pod/oauth-openshift-7bccf64dbb-q4pfl container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:02:22 crc kubenswrapper[4823]: I0126 16:02:22.822457 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-7bccf64dbb-q4pfl" podUID="326827d0-4111-4c4b-88f2-47ba5553a488" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:02:23 crc kubenswrapper[4823]: I0126 16:02:23.566317 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:02:23 crc kubenswrapper[4823]: E0126 16:02:23.566859 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:02:35 crc kubenswrapper[4823]: I0126 16:02:35.564492 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:02:35 crc kubenswrapper[4823]: E0126 16:02:35.566517 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:02:50 crc kubenswrapper[4823]: I0126 16:02:50.560609 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:02:50 crc kubenswrapper[4823]: E0126 16:02:50.561468 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:03:01 crc kubenswrapper[4823]: I0126 16:03:01.561430 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:03:01 crc kubenswrapper[4823]: E0126 16:03:01.563441 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:03:12 crc kubenswrapper[4823]: I0126 16:03:12.560286 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:03:12 crc kubenswrapper[4823]: E0126 
16:03:12.561080 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:03:27 crc kubenswrapper[4823]: I0126 16:03:27.560836 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:03:27 crc kubenswrapper[4823]: E0126 16:03:27.561759 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:03:42 crc kubenswrapper[4823]: I0126 16:03:42.561108 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:03:42 crc kubenswrapper[4823]: E0126 16:03:42.562385 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:03:54 crc kubenswrapper[4823]: I0126 16:03:54.560424 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:03:54 crc 
kubenswrapper[4823]: E0126 16:03:54.561162 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:04:06 crc kubenswrapper[4823]: I0126 16:04:06.560655 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:04:06 crc kubenswrapper[4823]: E0126 16:04:06.563210 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.719747 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zdt72"] Jan 26 16:04:13 crc kubenswrapper[4823]: E0126 16:04:13.721483 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96419d80-48f5-4579-884b-ae8f81f43ff6" containerName="keystone-cron" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.721523 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="96419d80-48f5-4579-884b-ae8f81f43ff6" containerName="keystone-cron" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.721791 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="96419d80-48f5-4579-884b-ae8f81f43ff6" containerName="keystone-cron" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.723647 4823 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.732486 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdt72"] Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.868131 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-utilities\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.868230 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fp4v\" (UniqueName: \"kubernetes.io/projected/bcff6e54-9fbf-4859-92e4-cef4947df806-kube-api-access-2fp4v\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.868299 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-catalog-content\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.970563 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-catalog-content\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.970723 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-utilities\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.970858 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fp4v\" (UniqueName: \"kubernetes.io/projected/bcff6e54-9fbf-4859-92e4-cef4947df806-kube-api-access-2fp4v\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.971196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-catalog-content\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.971210 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-utilities\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:13 crc kubenswrapper[4823]: I0126 16:04:13.992974 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fp4v\" (UniqueName: \"kubernetes.io/projected/bcff6e54-9fbf-4859-92e4-cef4947df806-kube-api-access-2fp4v\") pod \"redhat-operators-zdt72\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:14 crc kubenswrapper[4823]: I0126 16:04:14.049011 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:14 crc kubenswrapper[4823]: I0126 16:04:14.573303 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zdt72"] Jan 26 16:04:14 crc kubenswrapper[4823]: I0126 16:04:14.833469 4823 generic.go:334] "Generic (PLEG): container finished" podID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerID="d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d" exitCode=0 Jan 26 16:04:14 crc kubenswrapper[4823]: I0126 16:04:14.833518 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerDied","Data":"d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d"} Jan 26 16:04:14 crc kubenswrapper[4823]: I0126 16:04:14.833550 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerStarted","Data":"8210e45a142d9c0f8c5638ccc46e9c488781747672b4933756d70860f29af36b"} Jan 26 16:04:14 crc kubenswrapper[4823]: I0126 16:04:14.835751 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:04:15 crc kubenswrapper[4823]: I0126 16:04:15.843973 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerStarted","Data":"efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0"} Jan 26 16:04:16 crc kubenswrapper[4823]: E0126 16:04:16.449698 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcff6e54_9fbf_4859_92e4_cef4947df806.slice/crio-efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0.scope\": 
RecentStats: unable to find data in memory cache]" Jan 26 16:04:17 crc kubenswrapper[4823]: I0126 16:04:17.863802 4823 generic.go:334] "Generic (PLEG): container finished" podID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerID="efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0" exitCode=0 Jan 26 16:04:17 crc kubenswrapper[4823]: I0126 16:04:17.863884 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerDied","Data":"efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0"} Jan 26 16:04:18 crc kubenswrapper[4823]: I0126 16:04:18.876298 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerStarted","Data":"a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b"} Jan 26 16:04:18 crc kubenswrapper[4823]: I0126 16:04:18.902089 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zdt72" podStartSLOduration=2.076216697 podStartE2EDuration="5.902061612s" podCreationTimestamp="2026-01-26 16:04:13 +0000 UTC" firstStartedPulling="2026-01-26 16:04:14.835408956 +0000 UTC m=+4651.520872061" lastFinishedPulling="2026-01-26 16:04:18.661253861 +0000 UTC m=+4655.346716976" observedRunningTime="2026-01-26 16:04:18.896574152 +0000 UTC m=+4655.582037267" watchObservedRunningTime="2026-01-26 16:04:18.902061612 +0000 UTC m=+4655.587524727" Jan 26 16:04:20 crc kubenswrapper[4823]: I0126 16:04:20.560466 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:04:20 crc kubenswrapper[4823]: E0126 16:04:20.560955 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:04:24 crc kubenswrapper[4823]: I0126 16:04:24.049849 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:24 crc kubenswrapper[4823]: I0126 16:04:24.050393 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:25 crc kubenswrapper[4823]: I0126 16:04:25.133564 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zdt72" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="registry-server" probeResult="failure" output=< Jan 26 16:04:25 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 16:04:25 crc kubenswrapper[4823]: > Jan 26 16:04:33 crc kubenswrapper[4823]: I0126 16:04:33.569773 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:04:33 crc kubenswrapper[4823]: E0126 16:04:33.570751 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:04:34 crc kubenswrapper[4823]: I0126 16:04:34.319481 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:34 crc kubenswrapper[4823]: I0126 16:04:34.370720 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:34 crc kubenswrapper[4823]: I0126 16:04:34.572260 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdt72"] Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.020925 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zdt72" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="registry-server" containerID="cri-o://a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b" gracePeriod=2 Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.690386 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.814527 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fp4v\" (UniqueName: \"kubernetes.io/projected/bcff6e54-9fbf-4859-92e4-cef4947df806-kube-api-access-2fp4v\") pod \"bcff6e54-9fbf-4859-92e4-cef4947df806\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.814586 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-catalog-content\") pod \"bcff6e54-9fbf-4859-92e4-cef4947df806\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.814613 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-utilities\") pod \"bcff6e54-9fbf-4859-92e4-cef4947df806\" (UID: \"bcff6e54-9fbf-4859-92e4-cef4947df806\") " Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.827145 4823 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-utilities" (OuterVolumeSpecName: "utilities") pod "bcff6e54-9fbf-4859-92e4-cef4947df806" (UID: "bcff6e54-9fbf-4859-92e4-cef4947df806"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.871641 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcff6e54-9fbf-4859-92e4-cef4947df806-kube-api-access-2fp4v" (OuterVolumeSpecName: "kube-api-access-2fp4v") pod "bcff6e54-9fbf-4859-92e4-cef4947df806" (UID: "bcff6e54-9fbf-4859-92e4-cef4947df806"). InnerVolumeSpecName "kube-api-access-2fp4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.922185 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fp4v\" (UniqueName: \"kubernetes.io/projected/bcff6e54-9fbf-4859-92e4-cef4947df806-kube-api-access-2fp4v\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:36 crc kubenswrapper[4823]: I0126 16:04:36.922249 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.010416 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bcff6e54-9fbf-4859-92e4-cef4947df806" (UID: "bcff6e54-9fbf-4859-92e4-cef4947df806"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.023945 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bcff6e54-9fbf-4859-92e4-cef4947df806-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.031129 4823 generic.go:334] "Generic (PLEG): container finished" podID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerID="a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b" exitCode=0 Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.031174 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerDied","Data":"a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b"} Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.031194 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zdt72" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.031213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zdt72" event={"ID":"bcff6e54-9fbf-4859-92e4-cef4947df806","Type":"ContainerDied","Data":"8210e45a142d9c0f8c5638ccc46e9c488781747672b4933756d70860f29af36b"} Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.031232 4823 scope.go:117] "RemoveContainer" containerID="a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.054865 4823 scope.go:117] "RemoveContainer" containerID="efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.073434 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zdt72"] Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.082173 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zdt72"] Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.102853 4823 scope.go:117] "RemoveContainer" containerID="d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.124542 4823 scope.go:117] "RemoveContainer" containerID="a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b" Jan 26 16:04:37 crc kubenswrapper[4823]: E0126 16:04:37.126113 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b\": container with ID starting with a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b not found: ID does not exist" containerID="a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.126182 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b"} err="failed to get container status \"a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b\": rpc error: code = NotFound desc = could not find container \"a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b\": container with ID starting with a977f845116058eefb9449c780c31d489b6431b364bd5459872e70b73feca15b not found: ID does not exist" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.126209 4823 scope.go:117] "RemoveContainer" containerID="efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0" Jan 26 16:04:37 crc kubenswrapper[4823]: E0126 16:04:37.126732 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0\": container with ID starting with efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0 not found: ID does not exist" containerID="efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.126768 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0"} err="failed to get container status \"efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0\": rpc error: code = NotFound desc = could not find container \"efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0\": container with ID starting with efbd1a9fff747a7791971f4fa23f065cf757aa790d825668b1ef6e4965d07ff0 not found: ID does not exist" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.126807 4823 scope.go:117] "RemoveContainer" containerID="d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d" Jan 26 16:04:37 crc kubenswrapper[4823]: E0126 
16:04:37.127065 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d\": container with ID starting with d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d not found: ID does not exist" containerID="d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.127086 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d"} err="failed to get container status \"d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d\": rpc error: code = NotFound desc = could not find container \"d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d\": container with ID starting with d817680e8d2c363e0043325d4cdb8b9e9cd3b767c6038a74be0f4135fdec2a9d not found: ID does not exist" Jan 26 16:04:37 crc kubenswrapper[4823]: I0126 16:04:37.570593 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" path="/var/lib/kubelet/pods/bcff6e54-9fbf-4859-92e4-cef4947df806/volumes" Jan 26 16:04:44 crc kubenswrapper[4823]: I0126 16:04:44.560529 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:04:44 crc kubenswrapper[4823]: E0126 16:04:44.561274 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:04:59 crc kubenswrapper[4823]: I0126 16:04:59.561036 
4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:04:59 crc kubenswrapper[4823]: E0126 16:04:59.562719 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:05:12 crc kubenswrapper[4823]: I0126 16:05:12.560514 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:05:12 crc kubenswrapper[4823]: E0126 16:05:12.561452 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:05:27 crc kubenswrapper[4823]: I0126 16:05:27.560525 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:05:27 crc kubenswrapper[4823]: E0126 16:05:27.561319 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:05:39 crc kubenswrapper[4823]: I0126 
16:05:39.561002 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:05:39 crc kubenswrapper[4823]: I0126 16:05:39.946686 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"57c744fded81824a856e0c69bb0cb1bbdf50a5f8de5439f42c6be8e3a437a1ea"} Jan 26 16:08:04 crc kubenswrapper[4823]: I0126 16:08:04.507959 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:08:04 crc kubenswrapper[4823]: I0126 16:08:04.511032 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:08:34 crc kubenswrapper[4823]: I0126 16:08:34.508660 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:08:34 crc kubenswrapper[4823]: I0126 16:08:34.509175 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:09:04 crc 
kubenswrapper[4823]: I0126 16:09:04.508127 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:09:04 crc kubenswrapper[4823]: I0126 16:09:04.509713 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:09:04 crc kubenswrapper[4823]: I0126 16:09:04.510012 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:09:04 crc kubenswrapper[4823]: I0126 16:09:04.510903 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57c744fded81824a856e0c69bb0cb1bbdf50a5f8de5439f42c6be8e3a437a1ea"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:09:04 crc kubenswrapper[4823]: I0126 16:09:04.511041 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://57c744fded81824a856e0c69bb0cb1bbdf50a5f8de5439f42c6be8e3a437a1ea" gracePeriod=600 Jan 26 16:09:05 crc kubenswrapper[4823]: I0126 16:09:05.034001 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" 
containerID="57c744fded81824a856e0c69bb0cb1bbdf50a5f8de5439f42c6be8e3a437a1ea" exitCode=0 Jan 26 16:09:05 crc kubenswrapper[4823]: I0126 16:09:05.034054 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"57c744fded81824a856e0c69bb0cb1bbdf50a5f8de5439f42c6be8e3a437a1ea"} Jan 26 16:09:05 crc kubenswrapper[4823]: I0126 16:09:05.034347 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8"} Jan 26 16:09:05 crc kubenswrapper[4823]: I0126 16:09:05.034398 4823 scope.go:117] "RemoveContainer" containerID="ec713045883a4d2f5f9465e721f868fe9786a749e8d44306c832e5662fa48c72" Jan 26 16:11:04 crc kubenswrapper[4823]: I0126 16:11:04.507987 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:11:04 crc kubenswrapper[4823]: I0126 16:11:04.508585 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.680034 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g2ddl"] Jan 26 16:11:13 crc kubenswrapper[4823]: E0126 16:11:13.680784 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="registry-server" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.680796 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="registry-server" Jan 26 16:11:13 crc kubenswrapper[4823]: E0126 16:11:13.680818 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="extract-content" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.680825 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="extract-content" Jan 26 16:11:13 crc kubenswrapper[4823]: E0126 16:11:13.680852 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="extract-utilities" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.680858 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="extract-utilities" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.681013 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcff6e54-9fbf-4859-92e4-cef4947df806" containerName="registry-server" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.682162 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.711077 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2ddl"] Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.803016 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rvk8\" (UniqueName: \"kubernetes.io/projected/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-kube-api-access-7rvk8\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.803097 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-catalog-content\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.803325 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-utilities\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.905661 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rvk8\" (UniqueName: \"kubernetes.io/projected/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-kube-api-access-7rvk8\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.905707 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-catalog-content\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.905758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-utilities\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.906306 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-catalog-content\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.906315 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-utilities\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:13 crc kubenswrapper[4823]: I0126 16:11:13.924467 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rvk8\" (UniqueName: \"kubernetes.io/projected/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-kube-api-access-7rvk8\") pod \"community-operators-g2ddl\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") " pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.008360 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g2ddl" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.552464 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g2ddl"] Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.685553 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k24fk"] Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.688316 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.702082 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k24fk"] Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.829005 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlvvd\" (UniqueName: \"kubernetes.io/projected/cbd35b20-5080-49c3-bd39-3aeb70707b15-kube-api-access-dlvvd\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.829506 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-catalog-content\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.829602 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-utilities\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " 
pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.931472 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-catalog-content\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.931574 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-utilities\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.931643 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlvvd\" (UniqueName: \"kubernetes.io/projected/cbd35b20-5080-49c3-bd39-3aeb70707b15-kube-api-access-dlvvd\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.931913 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-catalog-content\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.932039 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-utilities\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " 
pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:14 crc kubenswrapper[4823]: I0126 16:11:14.950421 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlvvd\" (UniqueName: \"kubernetes.io/projected/cbd35b20-5080-49c3-bd39-3aeb70707b15-kube-api-access-dlvvd\") pod \"certified-operators-k24fk\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") " pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.036677 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24fk" Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.497428 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k24fk"] Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.547256 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerStarted","Data":"a2c7fe1bf80c36fa27741a2d12e668ad89bd2040322db79996398e64058647c6"} Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.548838 4823 generic.go:334] "Generic (PLEG): container finished" podID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerID="5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8" exitCode=0 Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.548870 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerDied","Data":"5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8"} Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.548886 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" 
event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerStarted","Data":"f4d21d150c73b1eb941c61a30fa24e684bbf555e758444b8873a7b8d228275a4"} Jan 26 16:11:15 crc kubenswrapper[4823]: I0126 16:11:15.559603 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.088942 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hc4gg"] Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.091757 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.103535 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc4gg"] Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.157393 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-catalog-content\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.157453 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2hjw\" (UniqueName: \"kubernetes.io/projected/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-kube-api-access-t2hjw\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.157492 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-utilities\") pod \"redhat-marketplace-hc4gg\" (UID: 
\"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.259093 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-catalog-content\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.259156 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2hjw\" (UniqueName: \"kubernetes.io/projected/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-kube-api-access-t2hjw\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.259194 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-utilities\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.259644 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-utilities\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.259875 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-catalog-content\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " 
pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.293258 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2hjw\" (UniqueName: \"kubernetes.io/projected/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-kube-api-access-t2hjw\") pod \"redhat-marketplace-hc4gg\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") " pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.412182 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc4gg" Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.556950 4823 generic.go:334] "Generic (PLEG): container finished" podID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerID="6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027" exitCode=0 Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.557047 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerDied","Data":"6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027"} Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.558937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerStarted","Data":"d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8"} Jan 26 16:11:16 crc kubenswrapper[4823]: I0126 16:11:16.967478 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc4gg"] Jan 26 16:11:17 crc kubenswrapper[4823]: I0126 16:11:17.568767 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerID="038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc" exitCode=0 Jan 26 16:11:17 crc 
kubenswrapper[4823]: I0126 16:11:17.571437 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerDied","Data":"038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc"} Jan 26 16:11:17 crc kubenswrapper[4823]: I0126 16:11:17.571488 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerStarted","Data":"5e4b6d0131f74ba46bac5fc865dda268ec1f9f48a080cbab2a5ab8cc6bcf2de7"} Jan 26 16:11:18 crc kubenswrapper[4823]: I0126 16:11:18.588724 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerDied","Data":"d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8"} Jan 26 16:11:18 crc kubenswrapper[4823]: I0126 16:11:18.588658 4823 generic.go:334] "Generic (PLEG): container finished" podID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerID="d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8" exitCode=0 Jan 26 16:11:18 crc kubenswrapper[4823]: I0126 16:11:18.600215 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerStarted","Data":"868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf"} Jan 26 16:11:19 crc kubenswrapper[4823]: I0126 16:11:19.611719 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerStarted","Data":"09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8"} Jan 26 16:11:19 crc kubenswrapper[4823]: I0126 16:11:19.616622 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerID="868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf" exitCode=0
Jan 26 16:11:19 crc kubenswrapper[4823]: I0126 16:11:19.616662 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerDied","Data":"868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf"}
Jan 26 16:11:20 crc kubenswrapper[4823]: I0126 16:11:20.626640 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerStarted","Data":"cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b"}
Jan 26 16:11:20 crc kubenswrapper[4823]: I0126 16:11:20.628180 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerID="09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8" exitCode=0
Jan 26 16:11:20 crc kubenswrapper[4823]: I0126 16:11:20.628240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerDied","Data":"09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8"}
Jan 26 16:11:20 crc kubenswrapper[4823]: I0126 16:11:20.631898 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerStarted","Data":"496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91"}
Jan 26 16:11:20 crc kubenswrapper[4823]: I0126 16:11:20.651667 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g2ddl" podStartSLOduration=3.690306727 podStartE2EDuration="7.651650305s" podCreationTimestamp="2026-01-26 16:11:13 +0000 UTC" firstStartedPulling="2026-01-26 16:11:15.559300696 +0000 UTC m=+5072.244763791" lastFinishedPulling="2026-01-26 16:11:19.520644264 +0000 UTC m=+5076.206107369" observedRunningTime="2026-01-26 16:11:20.642496035 +0000 UTC m=+5077.327959140" watchObservedRunningTime="2026-01-26 16:11:20.651650305 +0000 UTC m=+5077.337113410"
Jan 26 16:11:20 crc kubenswrapper[4823]: I0126 16:11:20.698781 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k24fk" podStartSLOduration=2.908858182 podStartE2EDuration="6.698757223s" podCreationTimestamp="2026-01-26 16:11:14 +0000 UTC" firstStartedPulling="2026-01-26 16:11:16.558941178 +0000 UTC m=+5073.244404283" lastFinishedPulling="2026-01-26 16:11:20.348840209 +0000 UTC m=+5077.034303324" observedRunningTime="2026-01-26 16:11:20.692532983 +0000 UTC m=+5077.377996088" watchObservedRunningTime="2026-01-26 16:11:20.698757223 +0000 UTC m=+5077.384220328"
Jan 26 16:11:21 crc kubenswrapper[4823]: I0126 16:11:21.642438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerStarted","Data":"2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7"}
Jan 26 16:11:21 crc kubenswrapper[4823]: I0126 16:11:21.663486 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hc4gg" podStartSLOduration=1.9393573160000002 podStartE2EDuration="5.663469069s" podCreationTimestamp="2026-01-26 16:11:16 +0000 UTC" firstStartedPulling="2026-01-26 16:11:17.571981145 +0000 UTC m=+5074.257444250" lastFinishedPulling="2026-01-26 16:11:21.296092898 +0000 UTC m=+5077.981556003" observedRunningTime="2026-01-26 16:11:21.658471303 +0000 UTC m=+5078.343934418" watchObservedRunningTime="2026-01-26 16:11:21.663469069 +0000 UTC m=+5078.348932174"
Jan 26 16:11:24 crc kubenswrapper[4823]: I0126 16:11:24.008959 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g2ddl"
Jan 26 16:11:24 crc kubenswrapper[4823]: I0126 16:11:24.009484 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g2ddl"
Jan 26 16:11:25 crc kubenswrapper[4823]: I0126 16:11:25.036984 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k24fk"
Jan 26 16:11:25 crc kubenswrapper[4823]: I0126 16:11:25.037023 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k24fk"
Jan 26 16:11:25 crc kubenswrapper[4823]: I0126 16:11:25.055507 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g2ddl" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="registry-server" probeResult="failure" output=<
Jan 26 16:11:25 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s
Jan 26 16:11:25 crc kubenswrapper[4823]: >
Jan 26 16:11:25 crc kubenswrapper[4823]: I0126 16:11:25.425021 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k24fk"
Jan 26 16:11:25 crc kubenswrapper[4823]: I0126 16:11:25.733281 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k24fk"
Jan 26 16:11:26 crc kubenswrapper[4823]: I0126 16:11:26.412599 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hc4gg"
Jan 26 16:11:26 crc kubenswrapper[4823]: I0126 16:11:26.412649 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hc4gg"
Jan 26 16:11:26 crc kubenswrapper[4823]: I0126 16:11:26.463061 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hc4gg"
Jan 26 16:11:26 crc kubenswrapper[4823]: I0126 16:11:26.674091 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k24fk"]
Jan 26 16:11:27 crc kubenswrapper[4823]: I0126 16:11:27.122556 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hc4gg"
Jan 26 16:11:27 crc kubenswrapper[4823]: I0126 16:11:27.699234 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k24fk" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="registry-server" containerID="cri-o://496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91" gracePeriod=2
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.321357 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24fk"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.508893 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-catalog-content\") pod \"cbd35b20-5080-49c3-bd39-3aeb70707b15\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") "
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.509101 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlvvd\" (UniqueName: \"kubernetes.io/projected/cbd35b20-5080-49c3-bd39-3aeb70707b15-kube-api-access-dlvvd\") pod \"cbd35b20-5080-49c3-bd39-3aeb70707b15\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") "
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.509185 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-utilities\") pod \"cbd35b20-5080-49c3-bd39-3aeb70707b15\" (UID: \"cbd35b20-5080-49c3-bd39-3aeb70707b15\") "
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.509939 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-utilities" (OuterVolumeSpecName: "utilities") pod "cbd35b20-5080-49c3-bd39-3aeb70707b15" (UID: "cbd35b20-5080-49c3-bd39-3aeb70707b15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.515488 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd35b20-5080-49c3-bd39-3aeb70707b15-kube-api-access-dlvvd" (OuterVolumeSpecName: "kube-api-access-dlvvd") pod "cbd35b20-5080-49c3-bd39-3aeb70707b15" (UID: "cbd35b20-5080-49c3-bd39-3aeb70707b15"). InnerVolumeSpecName "kube-api-access-dlvvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.556136 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbd35b20-5080-49c3-bd39-3aeb70707b15" (UID: "cbd35b20-5080-49c3-bd39-3aeb70707b15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.612092 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlvvd\" (UniqueName: \"kubernetes.io/projected/cbd35b20-5080-49c3-bd39-3aeb70707b15-kube-api-access-dlvvd\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.612125 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.612134 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbd35b20-5080-49c3-bd39-3aeb70707b15-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.708848 4823 generic.go:334] "Generic (PLEG): container finished" podID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerID="496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91" exitCode=0
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.708886 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerDied","Data":"496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91"}
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.708924 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k24fk"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.709128 4823 scope.go:117] "RemoveContainer" containerID="496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.709117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k24fk" event={"ID":"cbd35b20-5080-49c3-bd39-3aeb70707b15","Type":"ContainerDied","Data":"a2c7fe1bf80c36fa27741a2d12e668ad89bd2040322db79996398e64058647c6"}
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.748754 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k24fk"]
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.749670 4823 scope.go:117] "RemoveContainer" containerID="868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.758141 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k24fk"]
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.777556 4823 scope.go:117] "RemoveContainer" containerID="6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.818339 4823 scope.go:117] "RemoveContainer" containerID="496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91"
Jan 26 16:11:28 crc kubenswrapper[4823]: E0126 16:11:28.819175 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91\": container with ID starting with 496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91 not found: ID does not exist" containerID="496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.819207 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91"} err="failed to get container status \"496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91\": rpc error: code = NotFound desc = could not find container \"496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91\": container with ID starting with 496d497c566d487b1eea63c430565a68fb3824d9a7566a89b5f32478cf464e91 not found: ID does not exist"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.819227 4823 scope.go:117] "RemoveContainer" containerID="868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf"
Jan 26 16:11:28 crc kubenswrapper[4823]: E0126 16:11:28.820262 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf\": container with ID starting with 868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf not found: ID does not exist" containerID="868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.820293 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf"} err="failed to get container status \"868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf\": rpc error: code = NotFound desc = could not find container \"868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf\": container with ID starting with 868ad222dbc89d258c4c68bf6cecd12ec99dbbac3aea83abb69034cfa31574cf not found: ID does not exist"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.820312 4823 scope.go:117] "RemoveContainer" containerID="6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027"
Jan 26 16:11:28 crc kubenswrapper[4823]: E0126 16:11:28.820933 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027\": container with ID starting with 6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027 not found: ID does not exist" containerID="6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.820953 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027"} err="failed to get container status \"6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027\": rpc error: code = NotFound desc = could not find container \"6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027\": container with ID starting with 6504cfd7feedfd3e6bf34b05b956ea395f8334dbca41e2feaadb1c9386072027 not found: ID does not exist"
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.873767 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc4gg"]
Jan 26 16:11:28 crc kubenswrapper[4823]: I0126 16:11:28.874083 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hc4gg" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="registry-server" containerID="cri-o://2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7" gracePeriod=2
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.522620 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc4gg"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.577029 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" path="/var/lib/kubelet/pods/cbd35b20-5080-49c3-bd39-3aeb70707b15/volumes"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.644775 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-catalog-content\") pod \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") "
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.644903 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2hjw\" (UniqueName: \"kubernetes.io/projected/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-kube-api-access-t2hjw\") pod \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") "
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.644977 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-utilities\") pod \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\" (UID: \"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82\") "
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.646024 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-utilities" (OuterVolumeSpecName: "utilities") pod "f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" (UID: "f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.654658 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-kube-api-access-t2hjw" (OuterVolumeSpecName: "kube-api-access-t2hjw") pod "f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" (UID: "f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82"). InnerVolumeSpecName "kube-api-access-t2hjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.665461 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" (UID: "f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.719884 4823 generic.go:334] "Generic (PLEG): container finished" podID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerID="2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7" exitCode=0
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.719955 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerDied","Data":"2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7"}
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.719987 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc4gg" event={"ID":"f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82","Type":"ContainerDied","Data":"5e4b6d0131f74ba46bac5fc865dda268ec1f9f48a080cbab2a5ab8cc6bcf2de7"}
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.720006 4823 scope.go:117] "RemoveContainer" containerID="2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.720131 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc4gg"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.746238 4823 scope.go:117] "RemoveContainer" containerID="09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.747657 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.747689 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2hjw\" (UniqueName: \"kubernetes.io/projected/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-kube-api-access-t2hjw\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.747703 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.750786 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc4gg"]
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.759867 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc4gg"]
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.779862 4823 scope.go:117] "RemoveContainer" containerID="038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.798471 4823 scope.go:117] "RemoveContainer" containerID="2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7"
Jan 26 16:11:29 crc kubenswrapper[4823]: E0126 16:11:29.799014 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7\": container with ID starting with 2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7 not found: ID does not exist" containerID="2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.799068 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7"} err="failed to get container status \"2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7\": rpc error: code = NotFound desc = could not find container \"2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7\": container with ID starting with 2fe57fe0cc32b6cc84f35d745ffb37b9a06f00560c0c11fadb3077e494bb81a7 not found: ID does not exist"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.799110 4823 scope.go:117] "RemoveContainer" containerID="09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8"
Jan 26 16:11:29 crc kubenswrapper[4823]: E0126 16:11:29.799941 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8\": container with ID starting with 09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8 not found: ID does not exist" containerID="09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.799988 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8"} err="failed to get container status \"09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8\": rpc error: code = NotFound desc = could not find container \"09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8\": container with ID starting with 09ab1e599264c5b19cdbfd1dcdb4a2d550ff9ce61931cbfb28555d126e8f31c8 not found: ID does not exist"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.800022 4823 scope.go:117] "RemoveContainer" containerID="038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc"
Jan 26 16:11:29 crc kubenswrapper[4823]: E0126 16:11:29.800425 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc\": container with ID starting with 038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc not found: ID does not exist" containerID="038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc"
Jan 26 16:11:29 crc kubenswrapper[4823]: I0126 16:11:29.800447 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc"} err="failed to get container status \"038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc\": rpc error: code = NotFound desc = could not find container \"038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc\": container with ID starting with 038ff2f4de75d3bf6e8ee4398e4a0065e872bd21d76d08a0ac55745a57cfdefc not found: ID does not exist"
Jan 26 16:11:31 crc kubenswrapper[4823]: I0126 16:11:31.571848 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" path="/var/lib/kubelet/pods/f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82/volumes"
Jan 26 16:11:34 crc kubenswrapper[4823]: I0126 16:11:34.058593 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g2ddl"
Jan 26 16:11:34 crc kubenswrapper[4823]: I0126 16:11:34.114151 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g2ddl"
Jan 26 16:11:34 crc kubenswrapper[4823]: I0126 16:11:34.507838 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:11:34 crc kubenswrapper[4823]: I0126 16:11:34.508249 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:11:35 crc kubenswrapper[4823]: I0126 16:11:35.072513 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2ddl"]
Jan 26 16:11:35 crc kubenswrapper[4823]: I0126 16:11:35.782188 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g2ddl" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="registry-server" containerID="cri-o://cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b" gracePeriod=2
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.434120 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2ddl"
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.580734 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-catalog-content\") pod \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") "
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.580958 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rvk8\" (UniqueName: \"kubernetes.io/projected/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-kube-api-access-7rvk8\") pod \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") "
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.581073 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-utilities\") pod \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\" (UID: \"44cd1aea-09c6-4601-a9a1-a513e97bbdbb\") "
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.583927 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-utilities" (OuterVolumeSpecName: "utilities") pod "44cd1aea-09c6-4601-a9a1-a513e97bbdbb" (UID: "44cd1aea-09c6-4601-a9a1-a513e97bbdbb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.607356 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-kube-api-access-7rvk8" (OuterVolumeSpecName: "kube-api-access-7rvk8") pod "44cd1aea-09c6-4601-a9a1-a513e97bbdbb" (UID: "44cd1aea-09c6-4601-a9a1-a513e97bbdbb"). InnerVolumeSpecName "kube-api-access-7rvk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.638759 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44cd1aea-09c6-4601-a9a1-a513e97bbdbb" (UID: "44cd1aea-09c6-4601-a9a1-a513e97bbdbb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.683448 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rvk8\" (UniqueName: \"kubernetes.io/projected/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-kube-api-access-7rvk8\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.683494 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.683510 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44cd1aea-09c6-4601-a9a1-a513e97bbdbb-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.793411 4823 generic.go:334] "Generic (PLEG): container finished" podID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerID="cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b" exitCode=0
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.793464 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerDied","Data":"cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b"}
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.793499 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g2ddl" event={"ID":"44cd1aea-09c6-4601-a9a1-a513e97bbdbb","Type":"ContainerDied","Data":"f4d21d150c73b1eb941c61a30fa24e684bbf555e758444b8873a7b8d228275a4"}
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.793521 4823 scope.go:117] "RemoveContainer" containerID="cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b"
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.793526 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g2ddl"
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.819486 4823 scope.go:117] "RemoveContainer" containerID="d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8"
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.842819 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g2ddl"]
Jan 26 16:11:36 crc kubenswrapper[4823]: I0126 16:11:36.857519 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g2ddl"]
Jan 26 16:11:36 crc kubenswrapper[4823]: E0126 16:11:36.894555 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44cd1aea_09c6_4601_a9a1_a513e97bbdbb.slice\": RecentStats: unable to find data in memory cache]"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.300503 4823 scope.go:117] "RemoveContainer" containerID="5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.443658 4823 scope.go:117] "RemoveContainer" containerID="cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b"
Jan 26 16:11:37 crc kubenswrapper[4823]: E0126 16:11:37.444174 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b\": container with ID starting with cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b not found: ID does not exist" containerID="cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.444199 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b"} err="failed to get container status \"cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b\": rpc error: code = NotFound desc = could not find container \"cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b\": container with ID starting with cb90a8a017d18d93e15e84a88437c0e0a93c20414612cef62c91759203cc6d3b not found: ID does not exist"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.444225 4823 scope.go:117] "RemoveContainer" containerID="d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8"
Jan 26 16:11:37 crc kubenswrapper[4823]: E0126 16:11:37.444810 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8\": container with ID starting with d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8 not found: ID does not exist" containerID="d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.444847 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8"} err="failed to get container status \"d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8\": rpc error: code = NotFound desc = could not find container \"d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8\": container with ID starting with d9d79dec0ed1fa129042b66a0b5a73ae8d685de65c763fad5691a8ccbaebfce8 not found: ID does not exist"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.444873 4823 scope.go:117] "RemoveContainer" containerID="5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8"
Jan 26 16:11:37 crc kubenswrapper[4823]: E0126 16:11:37.445201 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8\": container with ID starting with 5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8 not found: ID does not exist" containerID="5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.445220 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8"} err="failed to get container status \"5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8\": rpc error: code = NotFound desc = could not find container \"5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8\": container with ID starting with 5ff87a14d09c1fc4a0e0354100853c42e090d054153aec32a9b7d4f716733ba8 not found: ID does not exist"
Jan 26 16:11:37 crc kubenswrapper[4823]: I0126 16:11:37.570933 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" path="/var/lib/kubelet/pods/44cd1aea-09c6-4601-a9a1-a513e97bbdbb/volumes"
Jan 26 16:12:04 crc kubenswrapper[4823]: I0126 16:12:04.508011 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:12:04 crc kubenswrapper[4823]: I0126 16:12:04.508644 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:12:04 crc kubenswrapper[4823]: I0126 16:12:04.508721 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2"
Jan 26 16:12:04 crc kubenswrapper[4823]: I0126 16:12:04.509699 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:12:04 crc kubenswrapper[4823]: I0126 16:12:04.509813 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" gracePeriod=600
Jan 26 16:12:04 crc kubenswrapper[4823]: E0126 16:12:04.632997 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d"
Jan 26 16:12:05 crc kubenswrapper[4823]: I0126 16:12:05.049434 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" exitCode=0
Jan 26 16:12:05 crc kubenswrapper[4823]: I0126 16:12:05.049494 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8"}
Jan 26 16:12:05 crc kubenswrapper[4823]: I0126 16:12:05.049869 4823 scope.go:117] "RemoveContainer" containerID="57c744fded81824a856e0c69bb0cb1bbdf50a5f8de5439f42c6be8e3a437a1ea"
Jan 26 16:12:05 crc kubenswrapper[4823]: I0126 16:12:05.050951 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8"
Jan 26 16:12:05 crc kubenswrapper[4823]: E0126 16:12:05.054040 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d"
Jan 26 16:12:19 crc kubenswrapper[4823]: I0126 16:12:19.560557 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8"
Jan 26 16:12:19 crc kubenswrapper[4823]: E0126 16:12:19.561803 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d"
Jan 26
16:12:32 crc kubenswrapper[4823]: I0126 16:12:32.559964 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:12:32 crc kubenswrapper[4823]: E0126 16:12:32.560908 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:12:43 crc kubenswrapper[4823]: I0126 16:12:43.572152 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:12:43 crc kubenswrapper[4823]: E0126 16:12:43.573356 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:12:55 crc kubenswrapper[4823]: I0126 16:12:55.567451 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:12:55 crc kubenswrapper[4823]: E0126 16:12:55.568639 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:13:08 crc kubenswrapper[4823]: I0126 16:13:08.567718 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:13:08 crc kubenswrapper[4823]: E0126 16:13:08.568348 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:13:21 crc kubenswrapper[4823]: I0126 16:13:21.560854 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:13:21 crc kubenswrapper[4823]: E0126 16:13:21.561889 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:13:33 crc kubenswrapper[4823]: I0126 16:13:33.571543 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:13:33 crc kubenswrapper[4823]: E0126 16:13:33.572811 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:13:44 crc kubenswrapper[4823]: I0126 16:13:44.560557 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:13:44 crc kubenswrapper[4823]: E0126 16:13:44.561661 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:13:58 crc kubenswrapper[4823]: I0126 16:13:58.561877 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:13:58 crc kubenswrapper[4823]: E0126 16:13:58.562944 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:14:09 crc kubenswrapper[4823]: I0126 16:14:09.560347 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:14:09 crc kubenswrapper[4823]: E0126 16:14:09.561167 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:14:24 crc kubenswrapper[4823]: I0126 16:14:24.561167 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:14:24 crc kubenswrapper[4823]: E0126 16:14:24.561949 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.302143 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-llspd"] Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.310606 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="extract-utilities" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.310710 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="extract-utilities" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.310786 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.310848 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.310921 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="extract-utilities" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.310977 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="extract-utilities" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.311038 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.311097 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.311198 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="extract-content" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.311275 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="extract-content" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.311350 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="extract-content" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.311453 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="extract-content" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.311528 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.311589 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.311659 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="extract-utilities" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.311731 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="extract-utilities" Jan 26 16:14:34 crc kubenswrapper[4823]: E0126 16:14:34.311808 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="extract-content" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.311862 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="extract-content" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.312105 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f3d328-b24e-43fd-9e5d-4cbefe8f4e82" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.312182 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbd35b20-5080-49c3-bd39-3aeb70707b15" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.312248 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="44cd1aea-09c6-4601-a9a1-a513e97bbdbb" containerName="registry-server" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.313726 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.337499 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llspd"] Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.501433 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stn2x\" (UniqueName: \"kubernetes.io/projected/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-kube-api-access-stn2x\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.501500 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-utilities\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.501800 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-catalog-content\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.604152 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stn2x\" (UniqueName: \"kubernetes.io/projected/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-kube-api-access-stn2x\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.604237 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-utilities\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.604398 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-catalog-content\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.605072 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-catalog-content\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.605232 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-utilities\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.647871 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stn2x\" (UniqueName: \"kubernetes.io/projected/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-kube-api-access-stn2x\") pod \"redhat-operators-llspd\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:34 crc kubenswrapper[4823]: I0126 16:14:34.934508 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:35 crc kubenswrapper[4823]: I0126 16:14:35.431411 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-llspd"] Jan 26 16:14:35 crc kubenswrapper[4823]: I0126 16:14:35.567120 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:14:35 crc kubenswrapper[4823]: E0126 16:14:35.567876 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:14:35 crc kubenswrapper[4823]: I0126 16:14:35.630102 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerStarted","Data":"84ed92f92052c3b1ddd6ee2c27a5ae70bfffe8f46017cde2f8d5311a6881f2bc"} Jan 26 16:14:36 crc kubenswrapper[4823]: I0126 16:14:36.650879 4823 generic.go:334] "Generic (PLEG): container finished" podID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerID="978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8" exitCode=0 Jan 26 16:14:36 crc kubenswrapper[4823]: I0126 16:14:36.651008 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerDied","Data":"978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8"} Jan 26 16:14:37 crc kubenswrapper[4823]: I0126 16:14:37.662093 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" 
event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerStarted","Data":"7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a"} Jan 26 16:14:42 crc kubenswrapper[4823]: I0126 16:14:42.705112 4823 generic.go:334] "Generic (PLEG): container finished" podID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerID="7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a" exitCode=0 Jan 26 16:14:42 crc kubenswrapper[4823]: I0126 16:14:42.705419 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerDied","Data":"7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a"} Jan 26 16:14:43 crc kubenswrapper[4823]: I0126 16:14:43.715725 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerStarted","Data":"0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566"} Jan 26 16:14:43 crc kubenswrapper[4823]: I0126 16:14:43.737755 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-llspd" podStartSLOduration=3.305131489 podStartE2EDuration="9.737737288s" podCreationTimestamp="2026-01-26 16:14:34 +0000 UTC" firstStartedPulling="2026-01-26 16:14:36.653044097 +0000 UTC m=+5273.338507202" lastFinishedPulling="2026-01-26 16:14:43.085649896 +0000 UTC m=+5279.771113001" observedRunningTime="2026-01-26 16:14:43.731001635 +0000 UTC m=+5280.416464750" watchObservedRunningTime="2026-01-26 16:14:43.737737288 +0000 UTC m=+5280.423200393" Jan 26 16:14:44 crc kubenswrapper[4823]: I0126 16:14:44.935801 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:44 crc kubenswrapper[4823]: I0126 16:14:44.936114 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:14:45 crc kubenswrapper[4823]: I0126 16:14:45.984950 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-llspd" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="registry-server" probeResult="failure" output=< Jan 26 16:14:45 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 16:14:45 crc kubenswrapper[4823]: > Jan 26 16:14:49 crc kubenswrapper[4823]: I0126 16:14:49.560329 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:14:49 crc kubenswrapper[4823]: E0126 16:14:49.561168 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:14:55 crc kubenswrapper[4823]: I0126 16:14:55.984506 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-llspd" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="registry-server" probeResult="failure" output=< Jan 26 16:14:55 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 16:14:55 crc kubenswrapper[4823]: > Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.149796 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt"] Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.152291 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.164832 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.171257 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.182952 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt"] Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.345394 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-config-volume\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.345601 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-secret-volume\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.345644 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cptz\" (UniqueName: \"kubernetes.io/projected/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-kube-api-access-4cptz\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.447257 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-secret-volume\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.447321 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cptz\" (UniqueName: \"kubernetes.io/projected/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-kube-api-access-4cptz\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.447396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-config-volume\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.448208 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-config-volume\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.457221 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-secret-volume\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.469171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cptz\" (UniqueName: \"kubernetes.io/projected/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-kube-api-access-4cptz\") pod \"collect-profiles-29490735-l88wt\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.498281 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.560741 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:15:00 crc kubenswrapper[4823]: E0126 16:15:00.561302 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:15:00 crc kubenswrapper[4823]: I0126 16:15:00.990082 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt"] Jan 26 16:15:01 crc kubenswrapper[4823]: I0126 16:15:01.862156 4823 generic.go:334] "Generic (PLEG): container finished" podID="7b2f3ca7-003f-48a8-afc6-a13f665b3c97" containerID="86c5def06e50035c332043a92a14a761c00a20443be9de213cb39bfad7ac0ff6" 
exitCode=0 Jan 26 16:15:01 crc kubenswrapper[4823]: I0126 16:15:01.862213 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" event={"ID":"7b2f3ca7-003f-48a8-afc6-a13f665b3c97","Type":"ContainerDied","Data":"86c5def06e50035c332043a92a14a761c00a20443be9de213cb39bfad7ac0ff6"} Jan 26 16:15:01 crc kubenswrapper[4823]: I0126 16:15:01.862750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" event={"ID":"7b2f3ca7-003f-48a8-afc6-a13f665b3c97","Type":"ContainerStarted","Data":"aadb98d14cd0ede9879d6ac596947e8e5adbe5c972b5f8eeb489b59330e097c1"} Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.314507 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.506999 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cptz\" (UniqueName: \"kubernetes.io/projected/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-kube-api-access-4cptz\") pod \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.507133 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-secret-volume\") pod \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.507418 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-config-volume\") pod \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\" (UID: \"7b2f3ca7-003f-48a8-afc6-a13f665b3c97\") " Jan 26 
16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.508741 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-config-volume" (OuterVolumeSpecName: "config-volume") pod "7b2f3ca7-003f-48a8-afc6-a13f665b3c97" (UID: "7b2f3ca7-003f-48a8-afc6-a13f665b3c97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.514291 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-kube-api-access-4cptz" (OuterVolumeSpecName: "kube-api-access-4cptz") pod "7b2f3ca7-003f-48a8-afc6-a13f665b3c97" (UID: "7b2f3ca7-003f-48a8-afc6-a13f665b3c97"). InnerVolumeSpecName "kube-api-access-4cptz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.516691 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7b2f3ca7-003f-48a8-afc6-a13f665b3c97" (UID: "7b2f3ca7-003f-48a8-afc6-a13f665b3c97"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.609376 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.609624 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cptz\" (UniqueName: \"kubernetes.io/projected/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-kube-api-access-4cptz\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.609705 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7b2f3ca7-003f-48a8-afc6-a13f665b3c97-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.880237 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" event={"ID":"7b2f3ca7-003f-48a8-afc6-a13f665b3c97","Type":"ContainerDied","Data":"aadb98d14cd0ede9879d6ac596947e8e5adbe5c972b5f8eeb489b59330e097c1"} Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.880567 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aadb98d14cd0ede9879d6ac596947e8e5adbe5c972b5f8eeb489b59330e097c1" Jan 26 16:15:03 crc kubenswrapper[4823]: I0126 16:15:03.880286 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt" Jan 26 16:15:04 crc kubenswrapper[4823]: I0126 16:15:04.405155 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d"] Jan 26 16:15:04 crc kubenswrapper[4823]: I0126 16:15:04.416258 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490690-4jm9d"] Jan 26 16:15:04 crc kubenswrapper[4823]: I0126 16:15:04.981840 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:15:05 crc kubenswrapper[4823]: I0126 16:15:05.032268 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:15:05 crc kubenswrapper[4823]: I0126 16:15:05.505617 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-llspd"] Jan 26 16:15:05 crc kubenswrapper[4823]: I0126 16:15:05.579606 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2af0e3cf-a6b5-4d9e-9077-a14b5dae054b" path="/var/lib/kubelet/pods/2af0e3cf-a6b5-4d9e-9077-a14b5dae054b/volumes" Jan 26 16:15:06 crc kubenswrapper[4823]: I0126 16:15:06.903097 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-llspd" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="registry-server" containerID="cri-o://0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566" gracePeriod=2 Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.465459 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.600156 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stn2x\" (UniqueName: \"kubernetes.io/projected/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-kube-api-access-stn2x\") pod \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.600822 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-catalog-content\") pod \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.600920 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-utilities\") pod \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\" (UID: \"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30\") " Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.601655 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-utilities" (OuterVolumeSpecName: "utilities") pod "7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" (UID: "7c3827ff-f8d1-46d3-ae32-04e6e7c69d30"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.601995 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.606561 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-kube-api-access-stn2x" (OuterVolumeSpecName: "kube-api-access-stn2x") pod "7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" (UID: "7c3827ff-f8d1-46d3-ae32-04e6e7c69d30"). InnerVolumeSpecName "kube-api-access-stn2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.702928 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stn2x\" (UniqueName: \"kubernetes.io/projected/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-kube-api-access-stn2x\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.726629 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" (UID: "7c3827ff-f8d1-46d3-ae32-04e6e7c69d30"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.804697 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.912575 4823 generic.go:334] "Generic (PLEG): container finished" podID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerID="0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566" exitCode=0 Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.912622 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerDied","Data":"0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566"} Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.912649 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-llspd" event={"ID":"7c3827ff-f8d1-46d3-ae32-04e6e7c69d30","Type":"ContainerDied","Data":"84ed92f92052c3b1ddd6ee2c27a5ae70bfffe8f46017cde2f8d5311a6881f2bc"} Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.912671 4823 scope.go:117] "RemoveContainer" containerID="0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.912675 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-llspd" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.943425 4823 scope.go:117] "RemoveContainer" containerID="7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a" Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.968280 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-llspd"] Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.989463 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-llspd"] Jan 26 16:15:07 crc kubenswrapper[4823]: I0126 16:15:07.997079 4823 scope.go:117] "RemoveContainer" containerID="978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.021167 4823 scope.go:117] "RemoveContainer" containerID="0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566" Jan 26 16:15:08 crc kubenswrapper[4823]: E0126 16:15:08.021686 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566\": container with ID starting with 0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566 not found: ID does not exist" containerID="0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.021727 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566"} err="failed to get container status \"0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566\": rpc error: code = NotFound desc = could not find container \"0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566\": container with ID starting with 0e4e09689087f89efdbe8c652ccf90bb5f67c5a5683e28b7fe92e5d99418d566 not found: ID does 
not exist" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.021753 4823 scope.go:117] "RemoveContainer" containerID="7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a" Jan 26 16:15:08 crc kubenswrapper[4823]: E0126 16:15:08.022145 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a\": container with ID starting with 7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a not found: ID does not exist" containerID="7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.022193 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a"} err="failed to get container status \"7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a\": rpc error: code = NotFound desc = could not find container \"7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a\": container with ID starting with 7f0c9d9de8c3d86f36bcd05b3e5dcfc47ecfbea0928b6c27f28ed1053175134a not found: ID does not exist" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.022228 4823 scope.go:117] "RemoveContainer" containerID="978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8" Jan 26 16:15:08 crc kubenswrapper[4823]: E0126 16:15:08.022531 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8\": container with ID starting with 978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8 not found: ID does not exist" containerID="978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.022561 4823 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8"} err="failed to get container status \"978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8\": rpc error: code = NotFound desc = could not find container \"978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8\": container with ID starting with 978b181e26a4d2fcfc6f8bebb8087b472b530ac80889713f31a9c04413268ab8 not found: ID does not exist" Jan 26 16:15:08 crc kubenswrapper[4823]: I0126 16:15:08.241475 4823 scope.go:117] "RemoveContainer" containerID="1fddaa5cdc847a20258746f67aa957ab544ed88413fb8df68dfbb9f17a23e4fe" Jan 26 16:15:09 crc kubenswrapper[4823]: I0126 16:15:09.575875 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" path="/var/lib/kubelet/pods/7c3827ff-f8d1-46d3-ae32-04e6e7c69d30/volumes" Jan 26 16:15:15 crc kubenswrapper[4823]: I0126 16:15:15.560316 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:15:15 crc kubenswrapper[4823]: E0126 16:15:15.561050 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:15:30 crc kubenswrapper[4823]: I0126 16:15:30.561317 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:15:30 crc kubenswrapper[4823]: E0126 16:15:30.562286 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:15:41 crc kubenswrapper[4823]: I0126 16:15:41.561444 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:15:41 crc kubenswrapper[4823]: E0126 16:15:41.562017 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:15:56 crc kubenswrapper[4823]: I0126 16:15:56.560103 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:15:56 crc kubenswrapper[4823]: E0126 16:15:56.560835 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:16:10 crc kubenswrapper[4823]: I0126 16:16:10.560842 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:16:10 crc kubenswrapper[4823]: E0126 16:16:10.561660 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:16:24 crc kubenswrapper[4823]: I0126 16:16:24.562113 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:16:24 crc kubenswrapper[4823]: E0126 16:16:24.563021 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:16:38 crc kubenswrapper[4823]: I0126 16:16:38.561281 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:16:38 crc kubenswrapper[4823]: E0126 16:16:38.562113 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:16:53 crc kubenswrapper[4823]: I0126 16:16:53.567269 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:16:53 crc kubenswrapper[4823]: E0126 16:16:53.568184 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:17:04 crc kubenswrapper[4823]: I0126 16:17:04.560398 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:17:05 crc kubenswrapper[4823]: I0126 16:17:05.005795 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"78a9744789a5529fe8db8b359bd502d37d4703f26bfec8dd43bbf8611b862ea7"} Jan 26 16:19:04 crc kubenswrapper[4823]: I0126 16:19:04.508017 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:19:04 crc kubenswrapper[4823]: I0126 16:19:04.508626 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:19:34 crc kubenswrapper[4823]: I0126 16:19:34.508579 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:19:34 crc kubenswrapper[4823]: I0126 16:19:34.510666 4823 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:20:04 crc kubenswrapper[4823]: I0126 16:20:04.508765 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:20:04 crc kubenswrapper[4823]: I0126 16:20:04.509247 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:20:04 crc kubenswrapper[4823]: I0126 16:20:04.509298 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:20:04 crc kubenswrapper[4823]: I0126 16:20:04.510211 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78a9744789a5529fe8db8b359bd502d37d4703f26bfec8dd43bbf8611b862ea7"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:20:04 crc kubenswrapper[4823]: I0126 16:20:04.510280 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" 
containerName="machine-config-daemon" containerID="cri-o://78a9744789a5529fe8db8b359bd502d37d4703f26bfec8dd43bbf8611b862ea7" gracePeriod=600 Jan 26 16:20:05 crc kubenswrapper[4823]: I0126 16:20:05.488010 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="78a9744789a5529fe8db8b359bd502d37d4703f26bfec8dd43bbf8611b862ea7" exitCode=0 Jan 26 16:20:05 crc kubenswrapper[4823]: I0126 16:20:05.488069 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"78a9744789a5529fe8db8b359bd502d37d4703f26bfec8dd43bbf8611b862ea7"} Jan 26 16:20:05 crc kubenswrapper[4823]: I0126 16:20:05.488401 4823 scope.go:117] "RemoveContainer" containerID="e73f94fb5e93b708997b50b323c7a88a9af0e965d823347f2105425f73336bf8" Jan 26 16:20:06 crc kubenswrapper[4823]: I0126 16:20:06.500250 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f"} Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.219412 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pzpm4"] Jan 26 16:22:03 crc kubenswrapper[4823]: E0126 16:22:03.220257 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="registry-server" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.220271 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="registry-server" Jan 26 16:22:03 crc kubenswrapper[4823]: E0126 16:22:03.220287 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" 
containerName="extract-utilities" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.220293 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="extract-utilities" Jan 26 16:22:03 crc kubenswrapper[4823]: E0126 16:22:03.220306 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="extract-content" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.220312 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="extract-content" Jan 26 16:22:03 crc kubenswrapper[4823]: E0126 16:22:03.220320 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b2f3ca7-003f-48a8-afc6-a13f665b3c97" containerName="collect-profiles" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.220326 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b2f3ca7-003f-48a8-afc6-a13f665b3c97" containerName="collect-profiles" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.220524 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b2f3ca7-003f-48a8-afc6-a13f665b3c97" containerName="collect-profiles" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.220542 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c3827ff-f8d1-46d3-ae32-04e6e7c69d30" containerName="registry-server" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.221864 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.235940 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzpm4"] Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.269942 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-utilities\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.270129 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-catalog-content\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.270172 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmmx\" (UniqueName: \"kubernetes.io/projected/37c6991d-8566-42eb-ba23-b4ab50af8805-kube-api-access-dbmmx\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.371577 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-utilities\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.371673 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-catalog-content\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.371691 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbmmx\" (UniqueName: \"kubernetes.io/projected/37c6991d-8566-42eb-ba23-b4ab50af8805-kube-api-access-dbmmx\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.372315 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-utilities\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.372556 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-catalog-content\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.408196 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbmmx\" (UniqueName: \"kubernetes.io/projected/37c6991d-8566-42eb-ba23-b4ab50af8805-kube-api-access-dbmmx\") pod \"certified-operators-pzpm4\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") " pod="openshift-marketplace/certified-operators-pzpm4" Jan 26 16:22:03 crc kubenswrapper[4823]: I0126 16:22:03.542909 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:04 crc kubenswrapper[4823]: I0126 16:22:04.051428 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzpm4"]
Jan 26 16:22:04 crc kubenswrapper[4823]: I0126 16:22:04.561504 4823 generic.go:334] "Generic (PLEG): container finished" podID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerID="8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24" exitCode=0
Jan 26 16:22:04 crc kubenswrapper[4823]: I0126 16:22:04.561558 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerDied","Data":"8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24"}
Jan 26 16:22:04 crc kubenswrapper[4823]: I0126 16:22:04.562061 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerStarted","Data":"6463e16d679e6d5b1a792bb146cbe5050bf8617dbe4291144318ddaf5fd31b7b"}
Jan 26 16:22:04 crc kubenswrapper[4823]: I0126 16:22:04.564682 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 26 16:22:05 crc kubenswrapper[4823]: I0126 16:22:05.572789 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerStarted","Data":"ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7"}
Jan 26 16:22:06 crc kubenswrapper[4823]: I0126 16:22:06.589553 4823 generic.go:334] "Generic (PLEG): container finished" podID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerID="ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7" exitCode=0
Jan 26 16:22:06 crc kubenswrapper[4823]: I0126 16:22:06.589664 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerDied","Data":"ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7"}
Jan 26 16:22:07 crc kubenswrapper[4823]: I0126 16:22:07.608408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerStarted","Data":"c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef"}
Jan 26 16:22:07 crc kubenswrapper[4823]: I0126 16:22:07.625049 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pzpm4" podStartSLOduration=1.898369272 podStartE2EDuration="4.625029382s" podCreationTimestamp="2026-01-26 16:22:03 +0000 UTC" firstStartedPulling="2026-01-26 16:22:04.564437417 +0000 UTC m=+5721.249900512" lastFinishedPulling="2026-01-26 16:22:07.291097517 +0000 UTC m=+5723.976560622" observedRunningTime="2026-01-26 16:22:07.624209339 +0000 UTC m=+5724.309672444" watchObservedRunningTime="2026-01-26 16:22:07.625029382 +0000 UTC m=+5724.310492497"
Jan 26 16:22:13 crc kubenswrapper[4823]: I0126 16:22:13.543810 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:13 crc kubenswrapper[4823]: I0126 16:22:13.544584 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:13 crc kubenswrapper[4823]: I0126 16:22:13.593879 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:13 crc kubenswrapper[4823]: I0126 16:22:13.703144 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:13 crc kubenswrapper[4823]: I0126 16:22:13.840176 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzpm4"]
Jan 26 16:22:15 crc kubenswrapper[4823]: I0126 16:22:15.691100 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pzpm4" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="registry-server" containerID="cri-o://c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef" gracePeriod=2
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.258112 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.341997 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-utilities\") pod \"37c6991d-8566-42eb-ba23-b4ab50af8805\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") "
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.342234 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-catalog-content\") pod \"37c6991d-8566-42eb-ba23-b4ab50af8805\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") "
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.342319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbmmx\" (UniqueName: \"kubernetes.io/projected/37c6991d-8566-42eb-ba23-b4ab50af8805-kube-api-access-dbmmx\") pod \"37c6991d-8566-42eb-ba23-b4ab50af8805\" (UID: \"37c6991d-8566-42eb-ba23-b4ab50af8805\") "
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.343541 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-utilities" (OuterVolumeSpecName: "utilities") pod "37c6991d-8566-42eb-ba23-b4ab50af8805" (UID: "37c6991d-8566-42eb-ba23-b4ab50af8805"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.344144 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.351814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c6991d-8566-42eb-ba23-b4ab50af8805-kube-api-access-dbmmx" (OuterVolumeSpecName: "kube-api-access-dbmmx") pod "37c6991d-8566-42eb-ba23-b4ab50af8805" (UID: "37c6991d-8566-42eb-ba23-b4ab50af8805"). InnerVolumeSpecName "kube-api-access-dbmmx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.401900 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37c6991d-8566-42eb-ba23-b4ab50af8805" (UID: "37c6991d-8566-42eb-ba23-b4ab50af8805"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.446058 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37c6991d-8566-42eb-ba23-b4ab50af8805-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.446131 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbmmx\" (UniqueName: \"kubernetes.io/projected/37c6991d-8566-42eb-ba23-b4ab50af8805-kube-api-access-dbmmx\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.702868 4823 generic.go:334] "Generic (PLEG): container finished" podID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerID="c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef" exitCode=0
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.702934 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerDied","Data":"c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef"}
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.703260 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzpm4" event={"ID":"37c6991d-8566-42eb-ba23-b4ab50af8805","Type":"ContainerDied","Data":"6463e16d679e6d5b1a792bb146cbe5050bf8617dbe4291144318ddaf5fd31b7b"}
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.703004 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzpm4"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.703292 4823 scope.go:117] "RemoveContainer" containerID="c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.732393 4823 scope.go:117] "RemoveContainer" containerID="ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.752477 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzpm4"]
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.762073 4823 scope.go:117] "RemoveContainer" containerID="8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.796566 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pzpm4"]
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.816160 4823 scope.go:117] "RemoveContainer" containerID="c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef"
Jan 26 16:22:16 crc kubenswrapper[4823]: E0126 16:22:16.816485 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef\": container with ID starting with c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef not found: ID does not exist" containerID="c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.816521 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef"} err="failed to get container status \"c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef\": rpc error: code = NotFound desc = could not find container \"c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef\": container with ID starting with c44e4fa67108ed7af0cf2b95bad3117810e83c02dfe51920b17d01e113c37bef not found: ID does not exist"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.816555 4823 scope.go:117] "RemoveContainer" containerID="ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7"
Jan 26 16:22:16 crc kubenswrapper[4823]: E0126 16:22:16.816950 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7\": container with ID starting with ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7 not found: ID does not exist" containerID="ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.816975 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7"} err="failed to get container status \"ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7\": rpc error: code = NotFound desc = could not find container \"ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7\": container with ID starting with ebfc7221054a1dcf8b9ef75e6ceab17a9c5175c319448813578c8fe7fea539a7 not found: ID does not exist"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.816997 4823 scope.go:117] "RemoveContainer" containerID="8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24"
Jan 26 16:22:16 crc kubenswrapper[4823]: E0126 16:22:16.817271 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24\": container with ID starting with 8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24 not found: ID does not exist" containerID="8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24"
Jan 26 16:22:16 crc kubenswrapper[4823]: I0126 16:22:16.817294 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24"} err="failed to get container status \"8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24\": rpc error: code = NotFound desc = could not find container \"8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24\": container with ID starting with 8a128bba8d1382b32b7c7a2e03cc8ab277e93c57575b1290c8736b330d620c24 not found: ID does not exist"
Jan 26 16:22:17 crc kubenswrapper[4823]: I0126 16:22:17.572773 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" path="/var/lib/kubelet/pods/37c6991d-8566-42eb-ba23-b4ab50af8805/volumes"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.507831 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.508270 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.623452 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h7x42"]
Jan 26 16:22:34 crc kubenswrapper[4823]: E0126 16:22:34.623917 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="registry-server"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.623937 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="registry-server"
Jan 26 16:22:34 crc kubenswrapper[4823]: E0126 16:22:34.623959 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="extract-utilities"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.623967 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="extract-utilities"
Jan 26 16:22:34 crc kubenswrapper[4823]: E0126 16:22:34.623979 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="extract-content"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.623987 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="extract-content"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.624240 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c6991d-8566-42eb-ba23-b4ab50af8805" containerName="registry-server"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.626013 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.647340 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7x42"]
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.815870 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-catalog-content\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.815968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-utilities\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.816125 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnzt2\" (UniqueName: \"kubernetes.io/projected/0e5093cc-d2c0-4839-a8c8-792d6c44809b-kube-api-access-mnzt2\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.917396 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnzt2\" (UniqueName: \"kubernetes.io/projected/0e5093cc-d2c0-4839-a8c8-792d6c44809b-kube-api-access-mnzt2\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.917515 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-catalog-content\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.917554 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-utilities\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.918119 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-utilities\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.918647 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-catalog-content\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.936770 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnzt2\" (UniqueName: \"kubernetes.io/projected/0e5093cc-d2c0-4839-a8c8-792d6c44809b-kube-api-access-mnzt2\") pod \"community-operators-h7x42\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") " pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:34 crc kubenswrapper[4823]: I0126 16:22:34.971732 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:35 crc kubenswrapper[4823]: W0126 16:22:35.508263 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e5093cc_d2c0_4839_a8c8_792d6c44809b.slice/crio-3295ad9cd815e9573490b015a41a5e6ac40cdb06a800dbf16eb3fb2d224f01b2 WatchSource:0}: Error finding container 3295ad9cd815e9573490b015a41a5e6ac40cdb06a800dbf16eb3fb2d224f01b2: Status 404 returned error can't find the container with id 3295ad9cd815e9573490b015a41a5e6ac40cdb06a800dbf16eb3fb2d224f01b2
Jan 26 16:22:35 crc kubenswrapper[4823]: I0126 16:22:35.517319 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h7x42"]
Jan 26 16:22:35 crc kubenswrapper[4823]: I0126 16:22:35.893778 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerID="3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a" exitCode=0
Jan 26 16:22:35 crc kubenswrapper[4823]: I0126 16:22:35.893830 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerDied","Data":"3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a"}
Jan 26 16:22:35 crc kubenswrapper[4823]: I0126 16:22:35.894078 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerStarted","Data":"3295ad9cd815e9573490b015a41a5e6ac40cdb06a800dbf16eb3fb2d224f01b2"}
Jan 26 16:22:37 crc kubenswrapper[4823]: I0126 16:22:37.913894 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerStarted","Data":"7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d"}
Jan 26 16:22:38 crc kubenswrapper[4823]: I0126 16:22:38.925827 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerID="7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d" exitCode=0
Jan 26 16:22:38 crc kubenswrapper[4823]: I0126 16:22:38.927168 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerDied","Data":"7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d"}
Jan 26 16:22:39 crc kubenswrapper[4823]: I0126 16:22:39.938391 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerStarted","Data":"076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023"}
Jan 26 16:22:44 crc kubenswrapper[4823]: I0126 16:22:44.972683 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:44 crc kubenswrapper[4823]: I0126 16:22:44.973240 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:45 crc kubenswrapper[4823]: I0126 16:22:45.025578 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:45 crc kubenswrapper[4823]: I0126 16:22:45.053297 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h7x42" podStartSLOduration=7.440104537 podStartE2EDuration="11.053269667s" podCreationTimestamp="2026-01-26 16:22:34 +0000 UTC" firstStartedPulling="2026-01-26 16:22:35.895957695 +0000 UTC m=+5752.581420800" lastFinishedPulling="2026-01-26 16:22:39.509122825 +0000 UTC m=+5756.194585930" observedRunningTime="2026-01-26 16:22:39.967404621 +0000 UTC m=+5756.652867736" watchObservedRunningTime="2026-01-26 16:22:45.053269667 +0000 UTC m=+5761.738732772"
Jan 26 16:22:46 crc kubenswrapper[4823]: I0126 16:22:46.029584 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:46 crc kubenswrapper[4823]: I0126 16:22:46.076295 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7x42"]
Jan 26 16:22:47 crc kubenswrapper[4823]: I0126 16:22:47.998446 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h7x42" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="registry-server" containerID="cri-o://076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023" gracePeriod=2
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.626160 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.789761 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-catalog-content\") pod \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") "
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.789830 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnzt2\" (UniqueName: \"kubernetes.io/projected/0e5093cc-d2c0-4839-a8c8-792d6c44809b-kube-api-access-mnzt2\") pod \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") "
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.789850 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-utilities\") pod \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\" (UID: \"0e5093cc-d2c0-4839-a8c8-792d6c44809b\") "
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.791236 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-utilities" (OuterVolumeSpecName: "utilities") pod "0e5093cc-d2c0-4839-a8c8-792d6c44809b" (UID: "0e5093cc-d2c0-4839-a8c8-792d6c44809b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.797484 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e5093cc-d2c0-4839-a8c8-792d6c44809b-kube-api-access-mnzt2" (OuterVolumeSpecName: "kube-api-access-mnzt2") pod "0e5093cc-d2c0-4839-a8c8-792d6c44809b" (UID: "0e5093cc-d2c0-4839-a8c8-792d6c44809b"). InnerVolumeSpecName "kube-api-access-mnzt2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.846760 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e5093cc-d2c0-4839-a8c8-792d6c44809b" (UID: "0e5093cc-d2c0-4839-a8c8-792d6c44809b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.892032 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.892066 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnzt2\" (UniqueName: \"kubernetes.io/projected/0e5093cc-d2c0-4839-a8c8-792d6c44809b-kube-api-access-mnzt2\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:48 crc kubenswrapper[4823]: I0126 16:22:48.892080 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e5093cc-d2c0-4839-a8c8-792d6c44809b-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.008160 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerID="076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023" exitCode=0
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.008209 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerDied","Data":"076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023"}
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.008239 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h7x42" event={"ID":"0e5093cc-d2c0-4839-a8c8-792d6c44809b","Type":"ContainerDied","Data":"3295ad9cd815e9573490b015a41a5e6ac40cdb06a800dbf16eb3fb2d224f01b2"}
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.008261 4823 scope.go:117] "RemoveContainer" containerID="076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.008440 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h7x42"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.049266 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h7x42"]
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.056050 4823 scope.go:117] "RemoveContainer" containerID="7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.062694 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h7x42"]
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.094628 4823 scope.go:117] "RemoveContainer" containerID="3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.126772 4823 scope.go:117] "RemoveContainer" containerID="076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023"
Jan 26 16:22:49 crc kubenswrapper[4823]: E0126 16:22:49.127154 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023\": container with ID starting with 076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023 not found: ID does not exist" containerID="076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.127199 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023"} err="failed to get container status \"076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023\": rpc error: code = NotFound desc = could not find container \"076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023\": container with ID starting with 076b7d6bc8a18156b1fdaf61e58c277a172c00834c1a150c40284e9d93254023 not found: ID does not exist"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.127229 4823 scope.go:117] "RemoveContainer" containerID="7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d"
Jan 26 16:22:49 crc kubenswrapper[4823]: E0126 16:22:49.127673 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d\": container with ID starting with 7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d not found: ID does not exist" containerID="7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.127765 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d"} err="failed to get container status \"7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d\": rpc error: code = NotFound desc = could not find container \"7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d\": container with ID starting with 7b712e06cca807fe67b914785ce240c2419c28b01ab8f7062e3d46c8e6f0751d not found: ID does not exist"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.127791 4823 scope.go:117] "RemoveContainer" containerID="3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a"
Jan 26 16:22:49 crc kubenswrapper[4823]: E0126 16:22:49.128216 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a\": container with ID starting with 3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a not found: ID does not exist" containerID="3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.128239 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a"} err="failed to get container status \"3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a\": rpc error: code = NotFound desc = could not find container \"3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a\": container with ID starting with 3657c5006437093302dbd09b924cb311596cebd6010e8eecdd96586c9c10e07a not found: ID does not exist"
Jan 26 16:22:49 crc kubenswrapper[4823]: I0126 16:22:49.573282 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" path="/var/lib/kubelet/pods/0e5093cc-d2c0-4839-a8c8-792d6c44809b/volumes"
Jan 26 16:23:04 crc kubenswrapper[4823]: I0126 16:23:04.508781 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:23:04 crc kubenswrapper[4823]: I0126 16:23:04.509351 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:23:34 crc kubenswrapper[4823]: I0126 16:23:34.508457 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 16:23:34 crc kubenswrapper[4823]: I0126 16:23:34.508929 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 16:23:34 crc kubenswrapper[4823]: I0126 16:23:34.508982 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2"
Jan 26 16:23:34 crc kubenswrapper[4823]: I0126 16:23:34.509867 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 16:23:34 crc kubenswrapper[4823]: I0126 16:23:34.509926 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" gracePeriod=600
Jan 26 16:23:34 crc kubenswrapper[4823]: E0126 16:23:34.636856 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d"
Jan 26 16:23:35 crc kubenswrapper[4823]: I0126 16:23:35.475156 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" exitCode=0
Jan 26 16:23:35 crc kubenswrapper[4823]: I0126 16:23:35.475211 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f"}
Jan 26 16:23:35 crc kubenswrapper[4823]: I0126 16:23:35.475260 4823 scope.go:117] "RemoveContainer" containerID="78a9744789a5529fe8db8b359bd502d37d4703f26bfec8dd43bbf8611b862ea7"
Jan 26 16:23:35 crc kubenswrapper[4823]: I0126 16:23:35.475984 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f"
Jan 26 16:23:35 crc kubenswrapper[4823]: E0126 16:23:35.476429 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d"
Jan 26 16:23:49 crc kubenswrapper[4823]: I0126 16:23:49.560527 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f"
Jan 26 16:23:49 crc kubenswrapper[4823]:
E0126 16:23:49.561653 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:24:04 crc kubenswrapper[4823]: I0126 16:24:04.560749 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:24:04 crc kubenswrapper[4823]: E0126 16:24:04.561569 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:24:18 crc kubenswrapper[4823]: I0126 16:24:18.561056 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:24:18 crc kubenswrapper[4823]: E0126 16:24:18.561971 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:24:29 crc kubenswrapper[4823]: I0126 16:24:29.561123 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:24:29 crc 
kubenswrapper[4823]: E0126 16:24:29.561943 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:24:41 crc kubenswrapper[4823]: I0126 16:24:41.560781 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:24:41 crc kubenswrapper[4823]: E0126 16:24:41.562125 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:24:55 crc kubenswrapper[4823]: I0126 16:24:55.560649 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:24:55 crc kubenswrapper[4823]: E0126 16:24:55.561225 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:25:10 crc kubenswrapper[4823]: I0126 16:25:10.560886 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 
26 16:25:10 crc kubenswrapper[4823]: E0126 16:25:10.561904 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:25:22 crc kubenswrapper[4823]: I0126 16:25:22.560224 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:25:22 crc kubenswrapper[4823]: E0126 16:25:22.561038 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:25:35 crc kubenswrapper[4823]: I0126 16:25:35.564162 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:25:35 crc kubenswrapper[4823]: E0126 16:25:35.565182 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:25:47 crc kubenswrapper[4823]: I0126 16:25:47.560883 4823 scope.go:117] "RemoveContainer" 
containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:25:47 crc kubenswrapper[4823]: E0126 16:25:47.561641 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.421387 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pzmlk"] Jan 26 16:25:48 crc kubenswrapper[4823]: E0126 16:25:48.422024 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="registry-server" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.422043 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="registry-server" Jan 26 16:25:48 crc kubenswrapper[4823]: E0126 16:25:48.422082 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="extract-utilities" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.422089 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="extract-utilities" Jan 26 16:25:48 crc kubenswrapper[4823]: E0126 16:25:48.422103 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="extract-content" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.422113 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="extract-content" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.422283 
4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e5093cc-d2c0-4839-a8c8-792d6c44809b" containerName="registry-server" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.423570 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.437349 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzmlk"] Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.492437 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-catalog-content\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.492540 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-utilities\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.492673 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdck4\" (UniqueName: \"kubernetes.io/projected/9b4e1642-3e55-43d4-8cf1-b73cd6861291-kube-api-access-xdck4\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.594667 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-utilities\") pod 
\"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.594799 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdck4\" (UniqueName: \"kubernetes.io/projected/9b4e1642-3e55-43d4-8cf1-b73cd6861291-kube-api-access-xdck4\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.594932 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-catalog-content\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.596505 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-catalog-content\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.596585 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-utilities\") pod \"redhat-operators-pzmlk\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.616896 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdck4\" (UniqueName: \"kubernetes.io/projected/9b4e1642-3e55-43d4-8cf1-b73cd6861291-kube-api-access-xdck4\") pod \"redhat-operators-pzmlk\" (UID: 
\"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:48 crc kubenswrapper[4823]: I0126 16:25:48.747487 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:49 crc kubenswrapper[4823]: I0126 16:25:49.218233 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pzmlk"] Jan 26 16:25:49 crc kubenswrapper[4823]: I0126 16:25:49.927616 4823 generic.go:334] "Generic (PLEG): container finished" podID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerID="e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20" exitCode=0 Jan 26 16:25:49 crc kubenswrapper[4823]: I0126 16:25:49.927666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerDied","Data":"e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20"} Jan 26 16:25:49 crc kubenswrapper[4823]: I0126 16:25:49.927912 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerStarted","Data":"e1ff1ab166d27c621473519e6fd1a171e4e372d08d6c8749f3277aba7551f860"} Jan 26 16:25:50 crc kubenswrapper[4823]: I0126 16:25:50.942440 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerStarted","Data":"7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce"} Jan 26 16:25:53 crc kubenswrapper[4823]: I0126 16:25:53.969190 4823 generic.go:334] "Generic (PLEG): container finished" podID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerID="7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce" exitCode=0 Jan 26 16:25:53 crc kubenswrapper[4823]: I0126 16:25:53.969248 4823 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerDied","Data":"7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce"} Jan 26 16:25:54 crc kubenswrapper[4823]: I0126 16:25:54.982474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerStarted","Data":"37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc"} Jan 26 16:25:55 crc kubenswrapper[4823]: I0126 16:25:55.001954 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pzmlk" podStartSLOduration=2.581758364 podStartE2EDuration="7.001934229s" podCreationTimestamp="2026-01-26 16:25:48 +0000 UTC" firstStartedPulling="2026-01-26 16:25:49.929674535 +0000 UTC m=+5946.615137640" lastFinishedPulling="2026-01-26 16:25:54.3498504 +0000 UTC m=+5951.035313505" observedRunningTime="2026-01-26 16:25:55.000393946 +0000 UTC m=+5951.685857071" watchObservedRunningTime="2026-01-26 16:25:55.001934229 +0000 UTC m=+5951.687397334" Jan 26 16:25:58 crc kubenswrapper[4823]: I0126 16:25:58.748867 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:58 crc kubenswrapper[4823]: I0126 16:25:58.749568 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:25:59 crc kubenswrapper[4823]: I0126 16:25:59.803026 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pzmlk" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="registry-server" probeResult="failure" output=< Jan 26 16:25:59 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 16:25:59 crc kubenswrapper[4823]: > Jan 26 
16:26:00 crc kubenswrapper[4823]: I0126 16:26:00.560750 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:26:00 crc kubenswrapper[4823]: E0126 16:26:00.561189 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:26:08 crc kubenswrapper[4823]: I0126 16:26:08.798751 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:26:08 crc kubenswrapper[4823]: I0126 16:26:08.856234 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:26:09 crc kubenswrapper[4823]: I0126 16:26:09.050550 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzmlk"] Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.098819 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pzmlk" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="registry-server" containerID="cri-o://37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc" gracePeriod=2 Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.727765 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.849941 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdck4\" (UniqueName: \"kubernetes.io/projected/9b4e1642-3e55-43d4-8cf1-b73cd6861291-kube-api-access-xdck4\") pod \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.850109 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-catalog-content\") pod \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.850285 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-utilities\") pod \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\" (UID: \"9b4e1642-3e55-43d4-8cf1-b73cd6861291\") " Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.851160 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-utilities" (OuterVolumeSpecName: "utilities") pod "9b4e1642-3e55-43d4-8cf1-b73cd6861291" (UID: "9b4e1642-3e55-43d4-8cf1-b73cd6861291"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.859816 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4e1642-3e55-43d4-8cf1-b73cd6861291-kube-api-access-xdck4" (OuterVolumeSpecName: "kube-api-access-xdck4") pod "9b4e1642-3e55-43d4-8cf1-b73cd6861291" (UID: "9b4e1642-3e55-43d4-8cf1-b73cd6861291"). InnerVolumeSpecName "kube-api-access-xdck4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.952891 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.952931 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdck4\" (UniqueName: \"kubernetes.io/projected/9b4e1642-3e55-43d4-8cf1-b73cd6861291-kube-api-access-xdck4\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:10 crc kubenswrapper[4823]: I0126 16:26:10.976203 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b4e1642-3e55-43d4-8cf1-b73cd6861291" (UID: "9b4e1642-3e55-43d4-8cf1-b73cd6861291"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.054656 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b4e1642-3e55-43d4-8cf1-b73cd6861291-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.108342 4823 generic.go:334] "Generic (PLEG): container finished" podID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerID="37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc" exitCode=0 Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.108408 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerDied","Data":"37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc"} Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.108445 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pzmlk" event={"ID":"9b4e1642-3e55-43d4-8cf1-b73cd6861291","Type":"ContainerDied","Data":"e1ff1ab166d27c621473519e6fd1a171e4e372d08d6c8749f3277aba7551f860"} Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.108463 4823 scope.go:117] "RemoveContainer" containerID="37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.108549 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pzmlk" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.129445 4823 scope.go:117] "RemoveContainer" containerID="7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.154816 4823 scope.go:117] "RemoveContainer" containerID="e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.157945 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pzmlk"] Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.169691 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pzmlk"] Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.193773 4823 scope.go:117] "RemoveContainer" containerID="37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc" Jan 26 16:26:11 crc kubenswrapper[4823]: E0126 16:26:11.194186 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc\": container with ID starting with 37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc not found: ID does not exist" containerID="37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.194216 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc"} err="failed to get container status \"37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc\": rpc error: code = NotFound desc = could not find container \"37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc\": container with ID starting with 37fe19904c0df76d5ced93227b9859ee62b0eee369c2ce5f86b8cb5c1b76a0cc not found: ID does not exist" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.194237 4823 scope.go:117] "RemoveContainer" containerID="7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce" Jan 26 16:26:11 crc kubenswrapper[4823]: E0126 16:26:11.194580 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce\": container with ID starting with 7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce not found: ID does not exist" containerID="7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.194605 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce"} err="failed to get container status \"7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce\": rpc error: code = NotFound desc = could not find container \"7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce\": container with ID starting with 7d8228c7e3100488b2be7bae1c69da997049546485f6dd85d3eb7a9f807eb4ce not found: ID does not exist" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.194625 4823 scope.go:117] "RemoveContainer" containerID="e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20" Jan 26 16:26:11 crc kubenswrapper[4823]: E0126 
16:26:11.194889 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20\": container with ID starting with e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20 not found: ID does not exist" containerID="e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.194909 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20"} err="failed to get container status \"e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20\": rpc error: code = NotFound desc = could not find container \"e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20\": container with ID starting with e011cfeeefa41c7e67b2d5f342f060e246627216ec6efebad5ff44e55c115c20 not found: ID does not exist" Jan 26 16:26:11 crc kubenswrapper[4823]: I0126 16:26:11.571161 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" path="/var/lib/kubelet/pods/9b4e1642-3e55-43d4-8cf1-b73cd6861291/volumes" Jan 26 16:26:13 crc kubenswrapper[4823]: I0126 16:26:13.570202 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:26:13 crc kubenswrapper[4823]: E0126 16:26:13.570688 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:26:25 crc kubenswrapper[4823]: I0126 16:26:25.560504 
4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:26:25 crc kubenswrapper[4823]: E0126 16:26:25.561311 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:26:40 crc kubenswrapper[4823]: I0126 16:26:40.560765 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:26:40 crc kubenswrapper[4823]: E0126 16:26:40.561515 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:26:51 crc kubenswrapper[4823]: I0126 16:26:51.561215 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:26:51 crc kubenswrapper[4823]: E0126 16:26:51.561962 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 
16:26:57.030745 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bg25j"] Jan 26 16:26:57 crc kubenswrapper[4823]: E0126 16:26:57.031600 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="registry-server" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.031612 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="registry-server" Jan 26 16:26:57 crc kubenswrapper[4823]: E0126 16:26:57.031632 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="extract-content" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.031639 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="extract-content" Jan 26 16:26:57 crc kubenswrapper[4823]: E0126 16:26:57.031651 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="extract-utilities" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.031658 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="extract-utilities" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.031864 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4e1642-3e55-43d4-8cf1-b73cd6861291" containerName="registry-server" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.033126 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.041837 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg25j"] Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.192678 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-catalog-content\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.193042 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-utilities\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.193074 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hmw\" (UniqueName: \"kubernetes.io/projected/3e2cb1ed-2896-46a3-958b-c44bd8f31430-kube-api-access-95hmw\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.294880 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-catalog-content\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.295030 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-utilities\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.295073 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95hmw\" (UniqueName: \"kubernetes.io/projected/3e2cb1ed-2896-46a3-958b-c44bd8f31430-kube-api-access-95hmw\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.296171 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-catalog-content\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.296482 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-utilities\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.314835 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hmw\" (UniqueName: \"kubernetes.io/projected/3e2cb1ed-2896-46a3-958b-c44bd8f31430-kube-api-access-95hmw\") pod \"redhat-marketplace-bg25j\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.366255 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:26:57 crc kubenswrapper[4823]: I0126 16:26:57.856402 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg25j"] Jan 26 16:26:58 crc kubenswrapper[4823]: I0126 16:26:58.525077 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerID="6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371" exitCode=0 Jan 26 16:26:58 crc kubenswrapper[4823]: I0126 16:26:58.525272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg25j" event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerDied","Data":"6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371"} Jan 26 16:26:58 crc kubenswrapper[4823]: I0126 16:26:58.525483 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg25j" event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerStarted","Data":"d00958d168a590eed4e8ee5609a9fa694b9130b0a6d00213a972b454d3b8e60d"} Jan 26 16:26:59 crc kubenswrapper[4823]: I0126 16:26:59.535767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg25j" event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerStarted","Data":"9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404"} Jan 26 16:27:00 crc kubenswrapper[4823]: I0126 16:27:00.544756 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerID="9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404" exitCode=0 Jan 26 16:27:00 crc kubenswrapper[4823]: I0126 16:27:00.544793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg25j" 
event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerDied","Data":"9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404"} Jan 26 16:27:01 crc kubenswrapper[4823]: I0126 16:27:01.557011 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg25j" event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerStarted","Data":"d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64"} Jan 26 16:27:01 crc kubenswrapper[4823]: I0126 16:27:01.582774 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bg25j" podStartSLOduration=2.130520174 podStartE2EDuration="4.582750526s" podCreationTimestamp="2026-01-26 16:26:57 +0000 UTC" firstStartedPulling="2026-01-26 16:26:58.527431156 +0000 UTC m=+6015.212894261" lastFinishedPulling="2026-01-26 16:27:00.979661468 +0000 UTC m=+6017.665124613" observedRunningTime="2026-01-26 16:27:01.577454221 +0000 UTC m=+6018.262917326" watchObservedRunningTime="2026-01-26 16:27:01.582750526 +0000 UTC m=+6018.268213651" Jan 26 16:27:05 crc kubenswrapper[4823]: I0126 16:27:05.560070 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:27:05 crc kubenswrapper[4823]: E0126 16:27:05.560871 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:27:07 crc kubenswrapper[4823]: I0126 16:27:07.367113 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:27:07 crc 
kubenswrapper[4823]: I0126 16:27:07.367350 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:27:07 crc kubenswrapper[4823]: I0126 16:27:07.418089 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:27:07 crc kubenswrapper[4823]: I0126 16:27:07.669436 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:27:07 crc kubenswrapper[4823]: I0126 16:27:07.718641 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg25j"] Jan 26 16:27:09 crc kubenswrapper[4823]: I0126 16:27:09.633935 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bg25j" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="registry-server" containerID="cri-o://d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64" gracePeriod=2 Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.276384 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.446653 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-catalog-content\") pod \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.446781 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-utilities\") pod \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.446872 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95hmw\" (UniqueName: \"kubernetes.io/projected/3e2cb1ed-2896-46a3-958b-c44bd8f31430-kube-api-access-95hmw\") pod \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\" (UID: \"3e2cb1ed-2896-46a3-958b-c44bd8f31430\") " Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.449297 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-utilities" (OuterVolumeSpecName: "utilities") pod "3e2cb1ed-2896-46a3-958b-c44bd8f31430" (UID: "3e2cb1ed-2896-46a3-958b-c44bd8f31430"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.452964 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e2cb1ed-2896-46a3-958b-c44bd8f31430-kube-api-access-95hmw" (OuterVolumeSpecName: "kube-api-access-95hmw") pod "3e2cb1ed-2896-46a3-958b-c44bd8f31430" (UID: "3e2cb1ed-2896-46a3-958b-c44bd8f31430"). InnerVolumeSpecName "kube-api-access-95hmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.472239 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e2cb1ed-2896-46a3-958b-c44bd8f31430" (UID: "3e2cb1ed-2896-46a3-958b-c44bd8f31430"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.549037 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.549073 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e2cb1ed-2896-46a3-958b-c44bd8f31430-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.549086 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95hmw\" (UniqueName: \"kubernetes.io/projected/3e2cb1ed-2896-46a3-958b-c44bd8f31430-kube-api-access-95hmw\") on node \"crc\" DevicePath \"\"" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.644592 4823 generic.go:334] "Generic (PLEG): container finished" podID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerID="d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64" exitCode=0 Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.644637 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bg25j" event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerDied","Data":"d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64"} Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.644667 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-bg25j" event={"ID":"3e2cb1ed-2896-46a3-958b-c44bd8f31430","Type":"ContainerDied","Data":"d00958d168a590eed4e8ee5609a9fa694b9130b0a6d00213a972b454d3b8e60d"} Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.644684 4823 scope.go:117] "RemoveContainer" containerID="d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.644763 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bg25j" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.665540 4823 scope.go:117] "RemoveContainer" containerID="9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.689510 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg25j"] Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.698345 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bg25j"] Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.713440 4823 scope.go:117] "RemoveContainer" containerID="6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.748337 4823 scope.go:117] "RemoveContainer" containerID="d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64" Jan 26 16:27:10 crc kubenswrapper[4823]: E0126 16:27:10.749596 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64\": container with ID starting with d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64 not found: ID does not exist" containerID="d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.749649 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64"} err="failed to get container status \"d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64\": rpc error: code = NotFound desc = could not find container \"d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64\": container with ID starting with d191d72b8a7395df2caa6124d80bb871421c4db8517fe497e236157ed495fa64 not found: ID does not exist" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.749682 4823 scope.go:117] "RemoveContainer" containerID="9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404" Jan 26 16:27:10 crc kubenswrapper[4823]: E0126 16:27:10.750288 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404\": container with ID starting with 9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404 not found: ID does not exist" containerID="9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.750323 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404"} err="failed to get container status \"9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404\": rpc error: code = NotFound desc = could not find container \"9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404\": container with ID starting with 9cb9aa3054d4c79d9cf3173282706250c5a985013d2254559a9cc947f88f8404 not found: ID does not exist" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.750351 4823 scope.go:117] "RemoveContainer" containerID="6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371" Jan 26 16:27:10 crc kubenswrapper[4823]: E0126 
16:27:10.750632 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371\": container with ID starting with 6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371 not found: ID does not exist" containerID="6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371" Jan 26 16:27:10 crc kubenswrapper[4823]: I0126 16:27:10.750655 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371"} err="failed to get container status \"6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371\": rpc error: code = NotFound desc = could not find container \"6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371\": container with ID starting with 6ed3aaafdc73ae62ff1d085aa2c532ec9f817e471b96ab889f1652f514c14371 not found: ID does not exist" Jan 26 16:27:11 crc kubenswrapper[4823]: I0126 16:27:11.583347 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" path="/var/lib/kubelet/pods/3e2cb1ed-2896-46a3-958b-c44bd8f31430/volumes" Jan 26 16:27:16 crc kubenswrapper[4823]: I0126 16:27:16.581730 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:27:16 crc kubenswrapper[4823]: E0126 16:27:16.587750 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:27:28 crc kubenswrapper[4823]: I0126 16:27:28.561699 
4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:27:28 crc kubenswrapper[4823]: E0126 16:27:28.563109 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:27:41 crc kubenswrapper[4823]: I0126 16:27:41.560492 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:27:41 crc kubenswrapper[4823]: E0126 16:27:41.561623 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:27:55 crc kubenswrapper[4823]: I0126 16:27:55.561160 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:27:55 crc kubenswrapper[4823]: E0126 16:27:55.562160 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:28:06 crc kubenswrapper[4823]: I0126 
16:28:06.561067 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:28:06 crc kubenswrapper[4823]: E0126 16:28:06.563516 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:28:17 crc kubenswrapper[4823]: I0126 16:28:17.561090 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:28:17 crc kubenswrapper[4823]: E0126 16:28:17.562022 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:28:30 crc kubenswrapper[4823]: I0126 16:28:30.560321 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:28:30 crc kubenswrapper[4823]: E0126 16:28:30.561205 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:28:42 crc 
kubenswrapper[4823]: I0126 16:28:42.560589 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:28:43 crc kubenswrapper[4823]: I0126 16:28:43.485138 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"de4bb6e0c46f95b31608fa9c4274f1460b6714adb9b80afd54cefd681817a88d"} Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.169107 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs"] Jan 26 16:30:00 crc kubenswrapper[4823]: E0126 16:30:00.170883 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="registry-server" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.170905 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="registry-server" Jan 26 16:30:00 crc kubenswrapper[4823]: E0126 16:30:00.170922 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="extract-utilities" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.170931 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="extract-utilities" Jan 26 16:30:00 crc kubenswrapper[4823]: E0126 16:30:00.170977 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="extract-content" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.170985 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="extract-content" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.171268 4823 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3e2cb1ed-2896-46a3-958b-c44bd8f31430" containerName="registry-server" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.172512 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.175517 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.176807 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.178786 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs"] Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.285707 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1136075-30a5-40bc-918e-59c818a5d71f-secret-volume\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.285784 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qk22\" (UniqueName: \"kubernetes.io/projected/e1136075-30a5-40bc-918e-59c818a5d71f-kube-api-access-9qk22\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.286056 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e1136075-30a5-40bc-918e-59c818a5d71f-config-volume\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.388409 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1136075-30a5-40bc-918e-59c818a5d71f-config-volume\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.388623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1136075-30a5-40bc-918e-59c818a5d71f-secret-volume\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.388669 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qk22\" (UniqueName: \"kubernetes.io/projected/e1136075-30a5-40bc-918e-59c818a5d71f-kube-api-access-9qk22\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.390356 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1136075-30a5-40bc-918e-59c818a5d71f-config-volume\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.412245 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1136075-30a5-40bc-918e-59c818a5d71f-secret-volume\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.415100 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qk22\" (UniqueName: \"kubernetes.io/projected/e1136075-30a5-40bc-918e-59c818a5d71f-kube-api-access-9qk22\") pod \"collect-profiles-29490750-2d5zs\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:00 crc kubenswrapper[4823]: I0126 16:30:00.498827 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:01 crc kubenswrapper[4823]: I0126 16:30:01.035613 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs"] Jan 26 16:30:01 crc kubenswrapper[4823]: I0126 16:30:01.162675 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" event={"ID":"e1136075-30a5-40bc-918e-59c818a5d71f","Type":"ContainerStarted","Data":"30e8e6daa9c78e6db317a33d17fb86d2cdd66cafb24c46c4f0333aea4ba6b014"} Jan 26 16:30:02 crc kubenswrapper[4823]: I0126 16:30:02.172792 4823 generic.go:334] "Generic (PLEG): container finished" podID="e1136075-30a5-40bc-918e-59c818a5d71f" containerID="d7b85ff8844ac8a62f757be50a33d3a024f10640b922aff53364eeee41d20cb2" exitCode=0 Jan 26 16:30:02 crc kubenswrapper[4823]: I0126 16:30:02.172882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" 
event={"ID":"e1136075-30a5-40bc-918e-59c818a5d71f","Type":"ContainerDied","Data":"d7b85ff8844ac8a62f757be50a33d3a024f10640b922aff53364eeee41d20cb2"} Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.609171 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.674115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1136075-30a5-40bc-918e-59c818a5d71f-secret-volume\") pod \"e1136075-30a5-40bc-918e-59c818a5d71f\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.674560 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qk22\" (UniqueName: \"kubernetes.io/projected/e1136075-30a5-40bc-918e-59c818a5d71f-kube-api-access-9qk22\") pod \"e1136075-30a5-40bc-918e-59c818a5d71f\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.674735 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1136075-30a5-40bc-918e-59c818a5d71f-config-volume\") pod \"e1136075-30a5-40bc-918e-59c818a5d71f\" (UID: \"e1136075-30a5-40bc-918e-59c818a5d71f\") " Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.675232 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1136075-30a5-40bc-918e-59c818a5d71f-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1136075-30a5-40bc-918e-59c818a5d71f" (UID: "e1136075-30a5-40bc-918e-59c818a5d71f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.675453 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1136075-30a5-40bc-918e-59c818a5d71f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.680507 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1136075-30a5-40bc-918e-59c818a5d71f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e1136075-30a5-40bc-918e-59c818a5d71f" (UID: "e1136075-30a5-40bc-918e-59c818a5d71f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.680969 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1136075-30a5-40bc-918e-59c818a5d71f-kube-api-access-9qk22" (OuterVolumeSpecName: "kube-api-access-9qk22") pod "e1136075-30a5-40bc-918e-59c818a5d71f" (UID: "e1136075-30a5-40bc-918e-59c818a5d71f"). InnerVolumeSpecName "kube-api-access-9qk22". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.776699 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1136075-30a5-40bc-918e-59c818a5d71f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:03 crc kubenswrapper[4823]: I0126 16:30:03.776741 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qk22\" (UniqueName: \"kubernetes.io/projected/e1136075-30a5-40bc-918e-59c818a5d71f-kube-api-access-9qk22\") on node \"crc\" DevicePath \"\"" Jan 26 16:30:04 crc kubenswrapper[4823]: I0126 16:30:04.192474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" event={"ID":"e1136075-30a5-40bc-918e-59c818a5d71f","Type":"ContainerDied","Data":"30e8e6daa9c78e6db317a33d17fb86d2cdd66cafb24c46c4f0333aea4ba6b014"} Jan 26 16:30:04 crc kubenswrapper[4823]: I0126 16:30:04.192518 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs" Jan 26 16:30:04 crc kubenswrapper[4823]: I0126 16:30:04.192525 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30e8e6daa9c78e6db317a33d17fb86d2cdd66cafb24c46c4f0333aea4ba6b014" Jan 26 16:30:04 crc kubenswrapper[4823]: I0126 16:30:04.686678 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw"] Jan 26 16:30:04 crc kubenswrapper[4823]: I0126 16:30:04.694358 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490705-7wgkw"] Jan 26 16:30:05 crc kubenswrapper[4823]: I0126 16:30:05.572121 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1affab3-fe81-427e-a854-2f53a8f705f1" path="/var/lib/kubelet/pods/e1affab3-fe81-427e-a854-2f53a8f705f1/volumes" Jan 26 16:30:08 crc kubenswrapper[4823]: I0126 16:30:08.622881 4823 scope.go:117] "RemoveContainer" containerID="74768d6742faded9cb18583e9239d41c6892e48f1f0775a43d05652514070de1" Jan 26 16:31:04 crc kubenswrapper[4823]: I0126 16:31:04.507948 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:31:04 crc kubenswrapper[4823]: I0126 16:31:04.509768 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:31:34 crc kubenswrapper[4823]: I0126 16:31:34.508536 4823 patch_prober.go:28] interesting 
pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:31:34 crc kubenswrapper[4823]: I0126 16:31:34.509058 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:32:04 crc kubenswrapper[4823]: I0126 16:32:04.508553 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:32:04 crc kubenswrapper[4823]: I0126 16:32:04.509531 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:32:04 crc kubenswrapper[4823]: I0126 16:32:04.509661 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:32:04 crc kubenswrapper[4823]: I0126 16:32:04.510890 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de4bb6e0c46f95b31608fa9c4274f1460b6714adb9b80afd54cefd681817a88d"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 26 16:32:04 crc kubenswrapper[4823]: I0126 16:32:04.510964 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://de4bb6e0c46f95b31608fa9c4274f1460b6714adb9b80afd54cefd681817a88d" gracePeriod=600 Jan 26 16:32:05 crc kubenswrapper[4823]: I0126 16:32:05.352743 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="de4bb6e0c46f95b31608fa9c4274f1460b6714adb9b80afd54cefd681817a88d" exitCode=0 Jan 26 16:32:05 crc kubenswrapper[4823]: I0126 16:32:05.352785 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"de4bb6e0c46f95b31608fa9c4274f1460b6714adb9b80afd54cefd681817a88d"} Jan 26 16:32:05 crc kubenswrapper[4823]: I0126 16:32:05.353296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741"} Jan 26 16:32:05 crc kubenswrapper[4823]: I0126 16:32:05.353320 4823 scope.go:117] "RemoveContainer" containerID="a4e94ef1ee8f5ff58ca008393fb25f8c35ced7c2bbad8ada29698e16ced12b2f" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.662711 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nxj8d"] Jan 26 16:32:45 crc kubenswrapper[4823]: E0126 16:32:45.663805 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1136075-30a5-40bc-918e-59c818a5d71f" containerName="collect-profiles" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.663821 4823 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e1136075-30a5-40bc-918e-59c818a5d71f" containerName="collect-profiles" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.664048 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1136075-30a5-40bc-918e-59c818a5d71f" containerName="collect-profiles" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.665706 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.692078 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxj8d"] Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.829196 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-utilities\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.829299 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqcvb\" (UniqueName: \"kubernetes.io/projected/1605aaa3-f856-4e76-9294-f80f2f565be1-kube-api-access-nqcvb\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.829353 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-catalog-content\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.931190 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-utilities\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.931309 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcvb\" (UniqueName: \"kubernetes.io/projected/1605aaa3-f856-4e76-9294-f80f2f565be1-kube-api-access-nqcvb\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.931379 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-catalog-content\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.931985 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-utilities\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.932040 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-catalog-content\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.955620 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nqcvb\" (UniqueName: \"kubernetes.io/projected/1605aaa3-f856-4e76-9294-f80f2f565be1-kube-api-access-nqcvb\") pod \"certified-operators-nxj8d\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:45 crc kubenswrapper[4823]: I0126 16:32:45.986630 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:46 crc kubenswrapper[4823]: I0126 16:32:46.543889 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nxj8d"] Jan 26 16:32:46 crc kubenswrapper[4823]: I0126 16:32:46.761071 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerStarted","Data":"3d0eec9e782059aed77d6e2e8a324b8f922ac7099d787a5d304c01b11a4b2b82"} Jan 26 16:32:47 crc kubenswrapper[4823]: I0126 16:32:47.773539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerDied","Data":"95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25"} Jan 26 16:32:47 crc kubenswrapper[4823]: I0126 16:32:47.776016 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:32:47 crc kubenswrapper[4823]: I0126 16:32:47.773537 4823 generic.go:334] "Generic (PLEG): container finished" podID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerID="95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25" exitCode=0 Jan 26 16:32:48 crc kubenswrapper[4823]: I0126 16:32:48.786172 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" 
event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerStarted","Data":"3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90"} Jan 26 16:32:49 crc kubenswrapper[4823]: I0126 16:32:49.809449 4823 generic.go:334] "Generic (PLEG): container finished" podID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerID="3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90" exitCode=0 Jan 26 16:32:49 crc kubenswrapper[4823]: I0126 16:32:49.809594 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerDied","Data":"3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90"} Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.420845 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pjfzg"] Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.423654 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.430847 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjfzg"] Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.524146 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmlbq\" (UniqueName: \"kubernetes.io/projected/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-kube-api-access-lmlbq\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.524254 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-catalog-content\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.524474 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-utilities\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.626773 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmlbq\" (UniqueName: \"kubernetes.io/projected/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-kube-api-access-lmlbq\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.626925 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-catalog-content\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.626965 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-utilities\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.627689 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-utilities\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.627699 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-catalog-content\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.664995 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmlbq\" (UniqueName: \"kubernetes.io/projected/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-kube-api-access-lmlbq\") pod \"community-operators-pjfzg\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.755088 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.827921 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerStarted","Data":"f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3"} Jan 26 16:32:50 crc kubenswrapper[4823]: I0126 16:32:50.850145 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nxj8d" podStartSLOduration=3.395991468 podStartE2EDuration="5.850126372s" podCreationTimestamp="2026-01-26 16:32:45 +0000 UTC" firstStartedPulling="2026-01-26 16:32:47.775735461 +0000 UTC m=+6364.461198586" lastFinishedPulling="2026-01-26 16:32:50.229870385 +0000 UTC m=+6366.915333490" observedRunningTime="2026-01-26 16:32:50.84782434 +0000 UTC m=+6367.533287445" watchObservedRunningTime="2026-01-26 16:32:50.850126372 +0000 UTC m=+6367.535589477" Jan 26 16:32:51 crc kubenswrapper[4823]: I0126 16:32:51.308990 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pjfzg"] Jan 26 16:32:51 crc kubenswrapper[4823]: W0126 16:32:51.320047 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32fa40ed_acf6_43ec_8e82_1cc343c7bf44.slice/crio-22a646f0fc3476846501fcfc97ccd6faff3cdd8a2a63cf303ee286794232f9a5 WatchSource:0}: Error finding container 22a646f0fc3476846501fcfc97ccd6faff3cdd8a2a63cf303ee286794232f9a5: Status 404 returned error can't find the container with id 22a646f0fc3476846501fcfc97ccd6faff3cdd8a2a63cf303ee286794232f9a5 Jan 26 16:32:51 crc kubenswrapper[4823]: I0126 16:32:51.840540 4823 generic.go:334] "Generic (PLEG): container finished" podID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerID="61cdbe71d576ea692ccd1025287a6ce0e2bc47d5feb6200d4ad45e7a7bf7b8d7" 
exitCode=0 Jan 26 16:32:51 crc kubenswrapper[4823]: I0126 16:32:51.840764 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerDied","Data":"61cdbe71d576ea692ccd1025287a6ce0e2bc47d5feb6200d4ad45e7a7bf7b8d7"} Jan 26 16:32:51 crc kubenswrapper[4823]: I0126 16:32:51.840936 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerStarted","Data":"22a646f0fc3476846501fcfc97ccd6faff3cdd8a2a63cf303ee286794232f9a5"} Jan 26 16:32:52 crc kubenswrapper[4823]: I0126 16:32:52.851826 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerStarted","Data":"7d748bb8f0b9968e2bfc42ee061b7dbd588aca6c0ba5c378663e3f8311075528"} Jan 26 16:32:53 crc kubenswrapper[4823]: I0126 16:32:53.863905 4823 generic.go:334] "Generic (PLEG): container finished" podID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerID="7d748bb8f0b9968e2bfc42ee061b7dbd588aca6c0ba5c378663e3f8311075528" exitCode=0 Jan 26 16:32:53 crc kubenswrapper[4823]: I0126 16:32:53.863992 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerDied","Data":"7d748bb8f0b9968e2bfc42ee061b7dbd588aca6c0ba5c378663e3f8311075528"} Jan 26 16:32:54 crc kubenswrapper[4823]: I0126 16:32:54.874810 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerStarted","Data":"d4d1bcfe7a68cc7b306af4771ff2225d6f800eb7950c87b32249bc459d90021b"} Jan 26 16:32:54 crc kubenswrapper[4823]: I0126 16:32:54.904292 4823 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/community-operators-pjfzg" podStartSLOduration=2.192414062 podStartE2EDuration="4.904267276s" podCreationTimestamp="2026-01-26 16:32:50 +0000 UTC" firstStartedPulling="2026-01-26 16:32:51.84360708 +0000 UTC m=+6368.529070185" lastFinishedPulling="2026-01-26 16:32:54.555460294 +0000 UTC m=+6371.240923399" observedRunningTime="2026-01-26 16:32:54.899736912 +0000 UTC m=+6371.585200037" watchObservedRunningTime="2026-01-26 16:32:54.904267276 +0000 UTC m=+6371.589730381" Jan 26 16:32:55 crc kubenswrapper[4823]: I0126 16:32:55.987251 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:55 crc kubenswrapper[4823]: I0126 16:32:55.988035 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:56 crc kubenswrapper[4823]: I0126 16:32:56.036879 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:56 crc kubenswrapper[4823]: I0126 16:32:56.932030 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:32:57 crc kubenswrapper[4823]: I0126 16:32:57.209825 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nxj8d"] Jan 26 16:32:58 crc kubenswrapper[4823]: I0126 16:32:58.905498 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nxj8d" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="registry-server" containerID="cri-o://f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3" gracePeriod=2 Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.747956 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.755330 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.755390 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.828976 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.849844 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-catalog-content\") pod \"1605aaa3-f856-4e76-9294-f80f2f565be1\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.850004 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqcvb\" (UniqueName: \"kubernetes.io/projected/1605aaa3-f856-4e76-9294-f80f2f565be1-kube-api-access-nqcvb\") pod \"1605aaa3-f856-4e76-9294-f80f2f565be1\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.850095 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-utilities\") pod \"1605aaa3-f856-4e76-9294-f80f2f565be1\" (UID: \"1605aaa3-f856-4e76-9294-f80f2f565be1\") " Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.852390 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-utilities" (OuterVolumeSpecName: "utilities") pod 
"1605aaa3-f856-4e76-9294-f80f2f565be1" (UID: "1605aaa3-f856-4e76-9294-f80f2f565be1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.880159 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1605aaa3-f856-4e76-9294-f80f2f565be1-kube-api-access-nqcvb" (OuterVolumeSpecName: "kube-api-access-nqcvb") pod "1605aaa3-f856-4e76-9294-f80f2f565be1" (UID: "1605aaa3-f856-4e76-9294-f80f2f565be1"). InnerVolumeSpecName "kube-api-access-nqcvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.915646 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1605aaa3-f856-4e76-9294-f80f2f565be1" (UID: "1605aaa3-f856-4e76-9294-f80f2f565be1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.928208 4823 generic.go:334] "Generic (PLEG): container finished" podID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerID="f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3" exitCode=0 Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.929382 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nxj8d" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.929515 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerDied","Data":"f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3"} Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.929685 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nxj8d" event={"ID":"1605aaa3-f856-4e76-9294-f80f2f565be1","Type":"ContainerDied","Data":"3d0eec9e782059aed77d6e2e8a324b8f922ac7099d787a5d304c01b11a4b2b82"} Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.929774 4823 scope.go:117] "RemoveContainer" containerID="f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.951144 4823 scope.go:117] "RemoveContainer" containerID="3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.952477 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqcvb\" (UniqueName: \"kubernetes.io/projected/1605aaa3-f856-4e76-9294-f80f2f565be1-kube-api-access-nqcvb\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.952686 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.952853 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1605aaa3-f856-4e76-9294-f80f2f565be1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.974345 4823 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-nxj8d"] Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.978397 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.982942 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nxj8d"] Jan 26 16:33:00 crc kubenswrapper[4823]: I0126 16:33:00.999655 4823 scope.go:117] "RemoveContainer" containerID="95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.019758 4823 scope.go:117] "RemoveContainer" containerID="f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3" Jan 26 16:33:01 crc kubenswrapper[4823]: E0126 16:33:01.021082 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3\": container with ID starting with f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3 not found: ID does not exist" containerID="f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.021134 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3"} err="failed to get container status \"f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3\": rpc error: code = NotFound desc = could not find container \"f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3\": container with ID starting with f2b54ac3362b241a33807ad109430718389806bb7a19d654f0302f063ac7e4d3 not found: ID does not exist" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.021165 4823 scope.go:117] "RemoveContainer" 
containerID="3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90" Jan 26 16:33:01 crc kubenswrapper[4823]: E0126 16:33:01.021484 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90\": container with ID starting with 3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90 not found: ID does not exist" containerID="3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.021524 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90"} err="failed to get container status \"3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90\": rpc error: code = NotFound desc = could not find container \"3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90\": container with ID starting with 3548b4169b42c6a9dce7933a9d746c6d2dac5400c8e106d77eb5528205fddf90 not found: ID does not exist" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.021553 4823 scope.go:117] "RemoveContainer" containerID="95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25" Jan 26 16:33:01 crc kubenswrapper[4823]: E0126 16:33:01.021807 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25\": container with ID starting with 95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25 not found: ID does not exist" containerID="95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.021900 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25"} err="failed to get container status \"95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25\": rpc error: code = NotFound desc = could not find container \"95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25\": container with ID starting with 95e21cf53441a6f52a4ff1ebc050cb254efaff513cee485d2b19e396a9e9fc25 not found: ID does not exist" Jan 26 16:33:01 crc kubenswrapper[4823]: I0126 16:33:01.578697 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" path="/var/lib/kubelet/pods/1605aaa3-f856-4e76-9294-f80f2f565be1/volumes" Jan 26 16:33:02 crc kubenswrapper[4823]: I0126 16:33:02.611733 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjfzg"] Jan 26 16:33:02 crc kubenswrapper[4823]: I0126 16:33:02.952103 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pjfzg" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="registry-server" containerID="cri-o://d4d1bcfe7a68cc7b306af4771ff2225d6f800eb7950c87b32249bc459d90021b" gracePeriod=2 Jan 26 16:33:03 crc kubenswrapper[4823]: I0126 16:33:03.965950 4823 generic.go:334] "Generic (PLEG): container finished" podID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerID="d4d1bcfe7a68cc7b306af4771ff2225d6f800eb7950c87b32249bc459d90021b" exitCode=0 Jan 26 16:33:03 crc kubenswrapper[4823]: I0126 16:33:03.966037 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerDied","Data":"d4d1bcfe7a68cc7b306af4771ff2225d6f800eb7950c87b32249bc459d90021b"} Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.068072 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.115760 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-utilities\") pod \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.116042 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-catalog-content\") pod \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.116159 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmlbq\" (UniqueName: \"kubernetes.io/projected/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-kube-api-access-lmlbq\") pod \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\" (UID: \"32fa40ed-acf6-43ec-8e82-1cc343c7bf44\") " Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.116695 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-utilities" (OuterVolumeSpecName: "utilities") pod "32fa40ed-acf6-43ec-8e82-1cc343c7bf44" (UID: "32fa40ed-acf6-43ec-8e82-1cc343c7bf44"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.117208 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.140920 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-kube-api-access-lmlbq" (OuterVolumeSpecName: "kube-api-access-lmlbq") pod "32fa40ed-acf6-43ec-8e82-1cc343c7bf44" (UID: "32fa40ed-acf6-43ec-8e82-1cc343c7bf44"). InnerVolumeSpecName "kube-api-access-lmlbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.179159 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32fa40ed-acf6-43ec-8e82-1cc343c7bf44" (UID: "32fa40ed-acf6-43ec-8e82-1cc343c7bf44"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.218928 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.218960 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmlbq\" (UniqueName: \"kubernetes.io/projected/32fa40ed-acf6-43ec-8e82-1cc343c7bf44-kube-api-access-lmlbq\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.980109 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pjfzg" event={"ID":"32fa40ed-acf6-43ec-8e82-1cc343c7bf44","Type":"ContainerDied","Data":"22a646f0fc3476846501fcfc97ccd6faff3cdd8a2a63cf303ee286794232f9a5"} Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.980170 4823 scope.go:117] "RemoveContainer" containerID="d4d1bcfe7a68cc7b306af4771ff2225d6f800eb7950c87b32249bc459d90021b" Jan 26 16:33:04 crc kubenswrapper[4823]: I0126 16:33:04.980321 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pjfzg" Jan 26 16:33:05 crc kubenswrapper[4823]: I0126 16:33:05.006753 4823 scope.go:117] "RemoveContainer" containerID="7d748bb8f0b9968e2bfc42ee061b7dbd588aca6c0ba5c378663e3f8311075528" Jan 26 16:33:05 crc kubenswrapper[4823]: I0126 16:33:05.038261 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pjfzg"] Jan 26 16:33:05 crc kubenswrapper[4823]: I0126 16:33:05.046874 4823 scope.go:117] "RemoveContainer" containerID="61cdbe71d576ea692ccd1025287a6ce0e2bc47d5feb6200d4ad45e7a7bf7b8d7" Jan 26 16:33:05 crc kubenswrapper[4823]: I0126 16:33:05.055504 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pjfzg"] Jan 26 16:33:05 crc kubenswrapper[4823]: I0126 16:33:05.573330 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" path="/var/lib/kubelet/pods/32fa40ed-acf6-43ec-8e82-1cc343c7bf44/volumes" Jan 26 16:33:36 crc kubenswrapper[4823]: I0126 16:33:36.278825 4823 generic.go:334] "Generic (PLEG): container finished" podID="1529ef7b-113d-479f-b4b7-d134a51539e3" containerID="c95d8fe1be519296ed4f5ffd641140beba45cfa4cbe03c889a27f4a50ce1e91c" exitCode=0 Jan 26 16:33:36 crc kubenswrapper[4823]: I0126 16:33:36.278967 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"1529ef7b-113d-479f-b4b7-d134a51539e3","Type":"ContainerDied","Data":"c95d8fe1be519296ed4f5ffd641140beba45cfa4cbe03c889a27f4a50ce1e91c"} Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.770136 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.861745 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ceph\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.861842 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.861950 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-config-data\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862115 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhbwz\" (UniqueName: \"kubernetes.io/projected/1529ef7b-113d-479f-b4b7-d134a51539e3-kube-api-access-zhbwz\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862154 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-temporary\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862189 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ssh-key\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862243 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-workdir\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862319 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config-secret\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862463 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.862510 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ca-certs\") pod \"1529ef7b-113d-479f-b4b7-d134a51539e3\" (UID: \"1529ef7b-113d-479f-b4b7-d134a51539e3\") " Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.866124 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: 
"1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868109 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 26 16:33:37 crc kubenswrapper[4823]: E0126 16:33:37.868640 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1529ef7b-113d-479f-b4b7-d134a51539e3" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868660 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1529ef7b-113d-479f-b4b7-d134a51539e3" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:33:37 crc kubenswrapper[4823]: E0126 16:33:37.868676 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="extract-content" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868685 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="extract-content" Jan 26 16:33:37 crc kubenswrapper[4823]: E0126 16:33:37.868705 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="registry-server" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868715 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="registry-server" Jan 26 16:33:37 crc kubenswrapper[4823]: E0126 16:33:37.868732 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="extract-utilities" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868740 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="extract-utilities" Jan 26 16:33:37 crc kubenswrapper[4823]: 
E0126 16:33:37.868750 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="extract-utilities" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868943 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="extract-utilities" Jan 26 16:33:37 crc kubenswrapper[4823]: E0126 16:33:37.868966 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="extract-content" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.868973 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="extract-content" Jan 26 16:33:37 crc kubenswrapper[4823]: E0126 16:33:37.868996 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="registry-server" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.869004 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="registry-server" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.869227 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1605aaa3-f856-4e76-9294-f80f2f565be1" containerName="registry-server" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.869386 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1529ef7b-113d-479f-b4b7-d134a51539e3" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.869404 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="32fa40ed-acf6-43ec-8e82-1cc343c7bf44" containerName="registry-server" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.870135 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-config-data" 
(OuterVolumeSpecName: "config-data") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.870358 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.872073 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ceph" (OuterVolumeSpecName: "ceph") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.872858 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.873124 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.873805 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.875604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). 
InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.876679 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1529ef7b-113d-479f-b4b7-d134a51539e3-kube-api-access-zhbwz" (OuterVolumeSpecName: "kube-api-access-zhbwz") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "kube-api-access-zhbwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.898511 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.903108 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.909130 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.927638 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.938750 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "1529ef7b-113d-479f-b4b7-d134a51539e3" (UID: "1529ef7b-113d-479f-b4b7-d134a51539e3"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.964847 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4bwq\" (UniqueName: \"kubernetes.io/projected/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-kube-api-access-q4bwq\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.964911 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965142 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965200 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965308 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965398 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965471 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965523 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965570 4823 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965613 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965895 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965924 4823 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965935 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965946 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1529ef7b-113d-479f-b4b7-d134a51539e3-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965957 4823 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zhbwz\" (UniqueName: \"kubernetes.io/projected/1529ef7b-113d-479f-b4b7-d134a51539e3-kube-api-access-zhbwz\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965969 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965979 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965989 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/1529ef7b-113d-479f-b4b7-d134a51539e3-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.965999 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1529ef7b-113d-479f-b4b7-d134a51539e3-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:33:37 crc kubenswrapper[4823]: I0126 16:33:37.997505 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067415 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: 
\"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067477 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067539 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067573 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067611 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067645 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ssh-key\") pod 
\"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067676 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067699 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.067735 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4bwq\" (UniqueName: \"kubernetes.io/projected/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-kube-api-access-q4bwq\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.068861 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.069301 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.069947 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.070892 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-config-data\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.071143 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ceph\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.071194 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ssh-key\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.071885 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ca-certs\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ca-certs\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.072285 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.086529 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4bwq\" (UniqueName: \"kubernetes.io/projected/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-kube-api-access-q4bwq\") pod \"tempest-tests-tempest-s01-single-test\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.300431 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-full" event={"ID":"1529ef7b-113d-479f-b4b7-d134a51539e3","Type":"ContainerDied","Data":"b2616dbd7453a5e3069c71c413b7dfc576781506c2fa5bf01a6fd56a9fe74294"} Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.300480 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2616dbd7453a5e3069c71c413b7dfc576781506c2fa5bf01a6fd56a9fe74294" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.300542 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-full" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.314060 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:33:38 crc kubenswrapper[4823]: I0126 16:33:38.853350 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-test"] Jan 26 16:33:39 crc kubenswrapper[4823]: I0126 16:33:39.311126 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"61b86bcd-b461-4d98-b3ab-67a1fd95eddc","Type":"ContainerStarted","Data":"44cdefd3872edf91d8ab5fedc5a442b1cce56e31ac7cf6b5c912fbe8311981d0"} Jan 26 16:33:40 crc kubenswrapper[4823]: I0126 16:33:40.323575 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"61b86bcd-b461-4d98-b3ab-67a1fd95eddc","Type":"ContainerStarted","Data":"03031f840003d5582c94869c66075c870ebc0fe73eddc2793529f3432a01f8dc"} Jan 26 16:33:40 crc kubenswrapper[4823]: I0126 16:33:40.348946 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-test" podStartSLOduration=3.348921845 podStartE2EDuration="3.348921845s" podCreationTimestamp="2026-01-26 16:33:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 16:33:40.342945931 +0000 UTC m=+6417.028409056" watchObservedRunningTime="2026-01-26 16:33:40.348921845 +0000 UTC m=+6417.034384960" Jan 26 16:34:04 crc kubenswrapper[4823]: I0126 16:34:04.508025 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:34:04 crc kubenswrapper[4823]: I0126 16:34:04.508677 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:34:34 crc kubenswrapper[4823]: I0126 16:34:34.508161 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:34:34 crc kubenswrapper[4823]: I0126 16:34:34.508777 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:35:04 crc kubenswrapper[4823]: I0126 16:35:04.508250 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:35:04 crc kubenswrapper[4823]: I0126 16:35:04.509747 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:35:04 crc kubenswrapper[4823]: I0126 16:35:04.509855 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:35:04 crc 
kubenswrapper[4823]: I0126 16:35:04.511164 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:35:04 crc kubenswrapper[4823]: I0126 16:35:04.511294 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" gracePeriod=600 Jan 26 16:35:04 crc kubenswrapper[4823]: E0126 16:35:04.638766 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:35:05 crc kubenswrapper[4823]: I0126 16:35:05.119557 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" exitCode=0 Jan 26 16:35:05 crc kubenswrapper[4823]: I0126 16:35:05.119653 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741"} Jan 26 16:35:05 crc kubenswrapper[4823]: I0126 16:35:05.119930 4823 scope.go:117] "RemoveContainer" 
containerID="de4bb6e0c46f95b31608fa9c4274f1460b6714adb9b80afd54cefd681817a88d" Jan 26 16:35:05 crc kubenswrapper[4823]: I0126 16:35:05.120796 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:35:05 crc kubenswrapper[4823]: E0126 16:35:05.121060 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:35:16 crc kubenswrapper[4823]: I0126 16:35:16.560870 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:35:16 crc kubenswrapper[4823]: E0126 16:35:16.563077 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:35:29 crc kubenswrapper[4823]: I0126 16:35:29.561164 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:35:29 crc kubenswrapper[4823]: E0126 16:35:29.562103 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:35:41 crc kubenswrapper[4823]: I0126 16:35:41.560643 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:35:41 crc kubenswrapper[4823]: E0126 16:35:41.561812 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:35:55 crc kubenswrapper[4823]: I0126 16:35:55.560007 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:35:55 crc kubenswrapper[4823]: E0126 16:35:55.560733 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:36:10 crc kubenswrapper[4823]: I0126 16:36:10.560848 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:36:10 crc kubenswrapper[4823]: E0126 16:36:10.561989 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:36:11 crc kubenswrapper[4823]: I0126 16:36:11.850204 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xbkcm"] Jan 26 16:36:11 crc kubenswrapper[4823]: I0126 16:36:11.852535 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:11 crc kubenswrapper[4823]: I0126 16:36:11.876881 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbkcm"] Jan 26 16:36:11 crc kubenswrapper[4823]: I0126 16:36:11.952111 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tglb2\" (UniqueName: \"kubernetes.io/projected/d5ebcd68-c812-400f-8919-5481ae36d4ff-kube-api-access-tglb2\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:11 crc kubenswrapper[4823]: I0126 16:36:11.952245 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-utilities\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:11 crc kubenswrapper[4823]: I0126 16:36:11.952387 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-catalog-content\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc 
kubenswrapper[4823]: I0126 16:36:12.054429 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tglb2\" (UniqueName: \"kubernetes.io/projected/d5ebcd68-c812-400f-8919-5481ae36d4ff-kube-api-access-tglb2\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.054509 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-utilities\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.054609 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-catalog-content\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.055096 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-utilities\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.055104 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-catalog-content\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.089015 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tglb2\" (UniqueName: \"kubernetes.io/projected/d5ebcd68-c812-400f-8919-5481ae36d4ff-kube-api-access-tglb2\") pod \"redhat-operators-xbkcm\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.188818 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.707688 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xbkcm"] Jan 26 16:36:12 crc kubenswrapper[4823]: I0126 16:36:12.791485 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerStarted","Data":"d23a93e724934423f9078fc28ec60a64bc82f211e3ca51ad4e3800d716612b29"} Jan 26 16:36:13 crc kubenswrapper[4823]: I0126 16:36:13.803835 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerID="c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409" exitCode=0 Jan 26 16:36:13 crc kubenswrapper[4823]: I0126 16:36:13.803937 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerDied","Data":"c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409"} Jan 26 16:36:14 crc kubenswrapper[4823]: I0126 16:36:14.814712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerStarted","Data":"29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295"} Jan 26 16:36:16 crc kubenswrapper[4823]: I0126 16:36:16.836729 4823 generic.go:334] "Generic 
(PLEG): container finished" podID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerID="29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295" exitCode=0 Jan 26 16:36:16 crc kubenswrapper[4823]: I0126 16:36:16.836834 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerDied","Data":"29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295"} Jan 26 16:36:17 crc kubenswrapper[4823]: I0126 16:36:17.847274 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerStarted","Data":"b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540"} Jan 26 16:36:17 crc kubenswrapper[4823]: I0126 16:36:17.873519 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xbkcm" podStartSLOduration=3.249851167 podStartE2EDuration="6.873503777s" podCreationTimestamp="2026-01-26 16:36:11 +0000 UTC" firstStartedPulling="2026-01-26 16:36:13.808029077 +0000 UTC m=+6570.493492212" lastFinishedPulling="2026-01-26 16:36:17.431681717 +0000 UTC m=+6574.117144822" observedRunningTime="2026-01-26 16:36:17.871414419 +0000 UTC m=+6574.556877554" watchObservedRunningTime="2026-01-26 16:36:17.873503777 +0000 UTC m=+6574.558966882" Jan 26 16:36:22 crc kubenswrapper[4823]: I0126 16:36:22.189264 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:22 crc kubenswrapper[4823]: I0126 16:36:22.189917 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:23 crc kubenswrapper[4823]: I0126 16:36:23.234311 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xbkcm" 
podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="registry-server" probeResult="failure" output=< Jan 26 16:36:23 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 16:36:23 crc kubenswrapper[4823]: > Jan 26 16:36:25 crc kubenswrapper[4823]: I0126 16:36:25.560803 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:36:25 crc kubenswrapper[4823]: E0126 16:36:25.561568 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:36:32 crc kubenswrapper[4823]: I0126 16:36:32.238687 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:32 crc kubenswrapper[4823]: I0126 16:36:32.302543 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:32 crc kubenswrapper[4823]: I0126 16:36:32.485024 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xbkcm"] Jan 26 16:36:33 crc kubenswrapper[4823]: I0126 16:36:33.998207 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xbkcm" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="registry-server" containerID="cri-o://b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540" gracePeriod=2 Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.525932 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.675697 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tglb2\" (UniqueName: \"kubernetes.io/projected/d5ebcd68-c812-400f-8919-5481ae36d4ff-kube-api-access-tglb2\") pod \"d5ebcd68-c812-400f-8919-5481ae36d4ff\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.676261 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-utilities\") pod \"d5ebcd68-c812-400f-8919-5481ae36d4ff\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.676974 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-catalog-content\") pod \"d5ebcd68-c812-400f-8919-5481ae36d4ff\" (UID: \"d5ebcd68-c812-400f-8919-5481ae36d4ff\") " Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.677262 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-utilities" (OuterVolumeSpecName: "utilities") pod "d5ebcd68-c812-400f-8919-5481ae36d4ff" (UID: "d5ebcd68-c812-400f-8919-5481ae36d4ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.677835 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.684440 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ebcd68-c812-400f-8919-5481ae36d4ff-kube-api-access-tglb2" (OuterVolumeSpecName: "kube-api-access-tglb2") pod "d5ebcd68-c812-400f-8919-5481ae36d4ff" (UID: "d5ebcd68-c812-400f-8919-5481ae36d4ff"). InnerVolumeSpecName "kube-api-access-tglb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.780211 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tglb2\" (UniqueName: \"kubernetes.io/projected/d5ebcd68-c812-400f-8919-5481ae36d4ff-kube-api-access-tglb2\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.807981 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5ebcd68-c812-400f-8919-5481ae36d4ff" (UID: "d5ebcd68-c812-400f-8919-5481ae36d4ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:36:34 crc kubenswrapper[4823]: I0126 16:36:34.882060 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5ebcd68-c812-400f-8919-5481ae36d4ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.013588 4823 generic.go:334] "Generic (PLEG): container finished" podID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerID="b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540" exitCode=0 Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.013636 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xbkcm" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.013660 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerDied","Data":"b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540"} Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.013712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xbkcm" event={"ID":"d5ebcd68-c812-400f-8919-5481ae36d4ff","Type":"ContainerDied","Data":"d23a93e724934423f9078fc28ec60a64bc82f211e3ca51ad4e3800d716612b29"} Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.013743 4823 scope.go:117] "RemoveContainer" containerID="b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.048963 4823 scope.go:117] "RemoveContainer" containerID="29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.057325 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xbkcm"] Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 
16:36:35.069579 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xbkcm"] Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.102201 4823 scope.go:117] "RemoveContainer" containerID="c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.127302 4823 scope.go:117] "RemoveContainer" containerID="b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540" Jan 26 16:36:35 crc kubenswrapper[4823]: E0126 16:36:35.127974 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540\": container with ID starting with b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540 not found: ID does not exist" containerID="b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.128013 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540"} err="failed to get container status \"b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540\": rpc error: code = NotFound desc = could not find container \"b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540\": container with ID starting with b35e9644071b4b638d592e23fbf78fd4f26c6ef859fbf3e84820914709dd8540 not found: ID does not exist" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.128037 4823 scope.go:117] "RemoveContainer" containerID="29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295" Jan 26 16:36:35 crc kubenswrapper[4823]: E0126 16:36:35.128526 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295\": container with ID 
starting with 29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295 not found: ID does not exist" containerID="29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.128578 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295"} err="failed to get container status \"29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295\": rpc error: code = NotFound desc = could not find container \"29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295\": container with ID starting with 29f1c29cbe3e3ff34f805c242a3c7e0088240872683019fd165a15d4d3bb3295 not found: ID does not exist" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.128613 4823 scope.go:117] "RemoveContainer" containerID="c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409" Jan 26 16:36:35 crc kubenswrapper[4823]: E0126 16:36:35.129175 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409\": container with ID starting with c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409 not found: ID does not exist" containerID="c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.129218 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409"} err="failed to get container status \"c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409\": rpc error: code = NotFound desc = could not find container \"c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409\": container with ID starting with c41808cd938de32ca8566b5b2126e57a08123ef2d88ead65cf61e3580a2e2409 not found: 
ID does not exist" Jan 26 16:36:35 crc kubenswrapper[4823]: I0126 16:36:35.581750 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" path="/var/lib/kubelet/pods/d5ebcd68-c812-400f-8919-5481ae36d4ff/volumes" Jan 26 16:36:37 crc kubenswrapper[4823]: I0126 16:36:37.560711 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:36:37 crc kubenswrapper[4823]: E0126 16:36:37.561103 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:36:51 crc kubenswrapper[4823]: I0126 16:36:51.561198 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:36:51 crc kubenswrapper[4823]: E0126 16:36:51.562793 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:37:04 crc kubenswrapper[4823]: I0126 16:37:04.561459 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:37:04 crc kubenswrapper[4823]: E0126 16:37:04.562300 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.540238 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rzzzq"] Jan 26 16:37:07 crc kubenswrapper[4823]: E0126 16:37:07.541187 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="extract-content" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.541202 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="extract-content" Jan 26 16:37:07 crc kubenswrapper[4823]: E0126 16:37:07.541228 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="extract-utilities" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.541234 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="extract-utilities" Jan 26 16:37:07 crc kubenswrapper[4823]: E0126 16:37:07.541260 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="registry-server" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.541267 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="registry-server" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.541464 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ebcd68-c812-400f-8919-5481ae36d4ff" containerName="registry-server" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.542753 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.583692 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzzzq"] Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.601869 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-catalog-content\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.601968 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-utilities\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.602170 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9ldx\" (UniqueName: \"kubernetes.io/projected/792b809e-829e-4215-ac7f-e0708da416dc-kube-api-access-s9ldx\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.703886 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9ldx\" (UniqueName: \"kubernetes.io/projected/792b809e-829e-4215-ac7f-e0708da416dc-kube-api-access-s9ldx\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.704033 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-catalog-content\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.704067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-utilities\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.704606 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-utilities\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.704889 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-catalog-content\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.732739 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9ldx\" (UniqueName: \"kubernetes.io/projected/792b809e-829e-4215-ac7f-e0708da416dc-kube-api-access-s9ldx\") pod \"redhat-marketplace-rzzzq\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:07 crc kubenswrapper[4823]: I0126 16:37:07.882314 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:08 crc kubenswrapper[4823]: I0126 16:37:08.339938 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzzzq"] Jan 26 16:37:08 crc kubenswrapper[4823]: W0126 16:37:08.343548 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod792b809e_829e_4215_ac7f_e0708da416dc.slice/crio-0b21f0f851498422e675f954c953ea15b4aa0f9009d2e2698b94b1a71ecfea23 WatchSource:0}: Error finding container 0b21f0f851498422e675f954c953ea15b4aa0f9009d2e2698b94b1a71ecfea23: Status 404 returned error can't find the container with id 0b21f0f851498422e675f954c953ea15b4aa0f9009d2e2698b94b1a71ecfea23 Jan 26 16:37:08 crc kubenswrapper[4823]: I0126 16:37:08.363557 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzzzq" event={"ID":"792b809e-829e-4215-ac7f-e0708da416dc","Type":"ContainerStarted","Data":"0b21f0f851498422e675f954c953ea15b4aa0f9009d2e2698b94b1a71ecfea23"} Jan 26 16:37:09 crc kubenswrapper[4823]: I0126 16:37:09.376154 4823 generic.go:334] "Generic (PLEG): container finished" podID="792b809e-829e-4215-ac7f-e0708da416dc" containerID="306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1" exitCode=0 Jan 26 16:37:09 crc kubenswrapper[4823]: I0126 16:37:09.376320 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzzzq" event={"ID":"792b809e-829e-4215-ac7f-e0708da416dc","Type":"ContainerDied","Data":"306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1"} Jan 26 16:37:11 crc kubenswrapper[4823]: I0126 16:37:11.401898 4823 generic.go:334] "Generic (PLEG): container finished" podID="792b809e-829e-4215-ac7f-e0708da416dc" containerID="52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29" exitCode=0 Jan 26 16:37:11 crc kubenswrapper[4823]: I0126 
16:37:11.402145 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzzzq" event={"ID":"792b809e-829e-4215-ac7f-e0708da416dc","Type":"ContainerDied","Data":"52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29"} Jan 26 16:37:12 crc kubenswrapper[4823]: I0126 16:37:12.414548 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzzzq" event={"ID":"792b809e-829e-4215-ac7f-e0708da416dc","Type":"ContainerStarted","Data":"e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d"} Jan 26 16:37:12 crc kubenswrapper[4823]: I0126 16:37:12.433536 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rzzzq" podStartSLOduration=2.987013778 podStartE2EDuration="5.433519294s" podCreationTimestamp="2026-01-26 16:37:07 +0000 UTC" firstStartedPulling="2026-01-26 16:37:09.378641814 +0000 UTC m=+6626.064104929" lastFinishedPulling="2026-01-26 16:37:11.82514734 +0000 UTC m=+6628.510610445" observedRunningTime="2026-01-26 16:37:12.431935541 +0000 UTC m=+6629.117398656" watchObservedRunningTime="2026-01-26 16:37:12.433519294 +0000 UTC m=+6629.118982399" Jan 26 16:37:17 crc kubenswrapper[4823]: I0126 16:37:17.883110 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:17 crc kubenswrapper[4823]: I0126 16:37:17.883687 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:17 crc kubenswrapper[4823]: I0126 16:37:17.963303 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:18 crc kubenswrapper[4823]: I0126 16:37:18.557568 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 
16:37:18 crc kubenswrapper[4823]: I0126 16:37:18.561673 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:37:18 crc kubenswrapper[4823]: E0126 16:37:18.561902 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:37:18 crc kubenswrapper[4823]: I0126 16:37:18.619288 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzzzq"] Jan 26 16:37:20 crc kubenswrapper[4823]: I0126 16:37:20.523794 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rzzzq" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="registry-server" containerID="cri-o://e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d" gracePeriod=2 Jan 26 16:37:20 crc kubenswrapper[4823]: I0126 16:37:20.964568 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:20 crc kubenswrapper[4823]: I0126 16:37:20.992277 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-utilities\") pod \"792b809e-829e-4215-ac7f-e0708da416dc\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " Jan 26 16:37:20 crc kubenswrapper[4823]: I0126 16:37:20.992616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-catalog-content\") pod \"792b809e-829e-4215-ac7f-e0708da416dc\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " Jan 26 16:37:20 crc kubenswrapper[4823]: I0126 16:37:20.992683 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9ldx\" (UniqueName: \"kubernetes.io/projected/792b809e-829e-4215-ac7f-e0708da416dc-kube-api-access-s9ldx\") pod \"792b809e-829e-4215-ac7f-e0708da416dc\" (UID: \"792b809e-829e-4215-ac7f-e0708da416dc\") " Jan 26 16:37:20 crc kubenswrapper[4823]: I0126 16:37:20.996416 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-utilities" (OuterVolumeSpecName: "utilities") pod "792b809e-829e-4215-ac7f-e0708da416dc" (UID: "792b809e-829e-4215-ac7f-e0708da416dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.001880 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792b809e-829e-4215-ac7f-e0708da416dc-kube-api-access-s9ldx" (OuterVolumeSpecName: "kube-api-access-s9ldx") pod "792b809e-829e-4215-ac7f-e0708da416dc" (UID: "792b809e-829e-4215-ac7f-e0708da416dc"). InnerVolumeSpecName "kube-api-access-s9ldx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.022327 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "792b809e-829e-4215-ac7f-e0708da416dc" (UID: "792b809e-829e-4215-ac7f-e0708da416dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.095000 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9ldx\" (UniqueName: \"kubernetes.io/projected/792b809e-829e-4215-ac7f-e0708da416dc-kube-api-access-s9ldx\") on node \"crc\" DevicePath \"\"" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.095341 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.095454 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792b809e-829e-4215-ac7f-e0708da416dc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.535107 4823 generic.go:334] "Generic (PLEG): container finished" podID="792b809e-829e-4215-ac7f-e0708da416dc" containerID="e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d" exitCode=0 Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.535269 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rzzzq" event={"ID":"792b809e-829e-4215-ac7f-e0708da416dc","Type":"ContainerDied","Data":"e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d"} Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.535657 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rzzzq" event={"ID":"792b809e-829e-4215-ac7f-e0708da416dc","Type":"ContainerDied","Data":"0b21f0f851498422e675f954c953ea15b4aa0f9009d2e2698b94b1a71ecfea23"} Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.535681 4823 scope.go:117] "RemoveContainer" containerID="e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.535457 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rzzzq" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.560321 4823 scope.go:117] "RemoveContainer" containerID="52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.575638 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzzzq"] Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.587124 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rzzzq"] Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.591831 4823 scope.go:117] "RemoveContainer" containerID="306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.625926 4823 scope.go:117] "RemoveContainer" containerID="e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d" Jan 26 16:37:21 crc kubenswrapper[4823]: E0126 16:37:21.626534 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d\": container with ID starting with e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d not found: ID does not exist" containerID="e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.626674 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d"} err="failed to get container status \"e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d\": rpc error: code = NotFound desc = could not find container \"e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d\": container with ID starting with e81f3546f757cb15dc319785fe7943df76a2cb04b1c1966badf97922e3d00a5d not found: ID does not exist" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.626780 4823 scope.go:117] "RemoveContainer" containerID="52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29" Jan 26 16:37:21 crc kubenswrapper[4823]: E0126 16:37:21.627270 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29\": container with ID starting with 52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29 not found: ID does not exist" containerID="52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.627308 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29"} err="failed to get container status \"52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29\": rpc error: code = NotFound desc = could not find container \"52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29\": container with ID starting with 52c7889bf2d995c99bf66af383cf7ad1efd05bd1446a07ac901664371c246a29 not found: ID does not exist" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.627340 4823 scope.go:117] "RemoveContainer" containerID="306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1" Jan 26 16:37:21 crc kubenswrapper[4823]: E0126 
16:37:21.627730 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1\": container with ID starting with 306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1 not found: ID does not exist" containerID="306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1" Jan 26 16:37:21 crc kubenswrapper[4823]: I0126 16:37:21.627856 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1"} err="failed to get container status \"306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1\": rpc error: code = NotFound desc = could not find container \"306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1\": container with ID starting with 306e5bdeb0e5e5a862322550f045a611814f3e798013b64c3fa2311a07d9d0d1 not found: ID does not exist" Jan 26 16:37:23 crc kubenswrapper[4823]: I0126 16:37:23.570907 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792b809e-829e-4215-ac7f-e0708da416dc" path="/var/lib/kubelet/pods/792b809e-829e-4215-ac7f-e0708da416dc/volumes" Jan 26 16:37:30 crc kubenswrapper[4823]: I0126 16:37:30.560555 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:37:30 crc kubenswrapper[4823]: E0126 16:37:30.561540 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:37:42 crc kubenswrapper[4823]: I0126 16:37:42.560573 
4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:37:42 crc kubenswrapper[4823]: E0126 16:37:42.561671 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:37:54 crc kubenswrapper[4823]: I0126 16:37:54.561253 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:37:54 crc kubenswrapper[4823]: E0126 16:37:54.562979 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:38:06 crc kubenswrapper[4823]: I0126 16:38:06.560867 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:38:06 crc kubenswrapper[4823]: E0126 16:38:06.561598 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:38:07 crc kubenswrapper[4823]: I0126 
16:38:07.390874 4823 generic.go:334] "Generic (PLEG): container finished" podID="61b86bcd-b461-4d98-b3ab-67a1fd95eddc" containerID="03031f840003d5582c94869c66075c870ebc0fe73eddc2793529f3432a01f8dc" exitCode=0 Jan 26 16:38:07 crc kubenswrapper[4823]: I0126 16:38:07.391146 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"61b86bcd-b461-4d98-b3ab-67a1fd95eddc","Type":"ContainerDied","Data":"03031f840003d5582c94869c66075c870ebc0fe73eddc2793529f3432a01f8dc"} Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.839247 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861547 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-workdir\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861664 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ssh-key\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861698 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ca-certs\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861808 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ceph\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861863 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-temporary\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861887 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-config-data\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861906 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.861930 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.862013 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4bwq\" (UniqueName: \"kubernetes.io/projected/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-kube-api-access-q4bwq\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: 
I0126 16:38:08.862034 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config-secret\") pod \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\" (UID: \"61b86bcd-b461-4d98-b3ab-67a1fd95eddc\") " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.862761 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.863643 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-config-data" (OuterVolumeSpecName: "config-data") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.868747 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "test-operator-logs") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.870093 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ceph" (OuterVolumeSpecName: "ceph") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.872498 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.880657 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-kube-api-access-q4bwq" (OuterVolumeSpecName: "kube-api-access-q4bwq") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "kube-api-access-q4bwq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.903917 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.912447 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.917603 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964616 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964689 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964757 4823 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964770 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4bwq\" (UniqueName: 
\"kubernetes.io/projected/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-kube-api-access-q4bwq\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964804 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964818 4823 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964832 4823 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964843 4823 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.964855 4823 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-ceph\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.966617 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "61b86bcd-b461-4d98-b3ab-67a1fd95eddc" (UID: "61b86bcd-b461-4d98-b3ab-67a1fd95eddc"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:38:08 crc kubenswrapper[4823]: I0126 16:38:08.987521 4823 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 26 16:38:09 crc kubenswrapper[4823]: I0126 16:38:09.066779 4823 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:09 crc kubenswrapper[4823]: I0126 16:38:09.066820 4823 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/61b86bcd-b461-4d98-b3ab-67a1fd95eddc-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 26 16:38:09 crc kubenswrapper[4823]: I0126 16:38:09.408697 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-test" event={"ID":"61b86bcd-b461-4d98-b3ab-67a1fd95eddc","Type":"ContainerDied","Data":"44cdefd3872edf91d8ab5fedc5a442b1cce56e31ac7cf6b5c912fbe8311981d0"} Jan 26 16:38:09 crc kubenswrapper[4823]: I0126 16:38:09.408917 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44cdefd3872edf91d8ab5fedc5a442b1cce56e31ac7cf6b5c912fbe8311981d0" Jan 26 16:38:09 crc kubenswrapper[4823]: I0126 16:38:09.408752 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-test" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.485150 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 16:38:11 crc kubenswrapper[4823]: E0126 16:38:11.485908 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="extract-utilities" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.485925 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="extract-utilities" Jan 26 16:38:11 crc kubenswrapper[4823]: E0126 16:38:11.485941 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="registry-server" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.485949 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="registry-server" Jan 26 16:38:11 crc kubenswrapper[4823]: E0126 16:38:11.485992 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="extract-content" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.486004 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="extract-content" Jan 26 16:38:11 crc kubenswrapper[4823]: E0126 16:38:11.486017 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61b86bcd-b461-4d98-b3ab-67a1fd95eddc" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.486027 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="61b86bcd-b461-4d98-b3ab-67a1fd95eddc" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.486278 4823 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="792b809e-829e-4215-ac7f-e0708da416dc" containerName="registry-server" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.486313 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="61b86bcd-b461-4d98-b3ab-67a1fd95eddc" containerName="tempest-tests-tempest-tests-runner" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.487283 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.490908 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hmpcx" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.499474 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.617797 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.618024 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmjg4\" (UniqueName: \"kubernetes.io/projected/288cc5ba-6f03-4b43-aa8a-840ab47267a4-kube-api-access-hmjg4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.719652 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmjg4\" (UniqueName: 
\"kubernetes.io/projected/288cc5ba-6f03-4b43-aa8a-840ab47267a4-kube-api-access-hmjg4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.719874 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.720778 4823 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.742061 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmjg4\" (UniqueName: \"kubernetes.io/projected/288cc5ba-6f03-4b43-aa8a-840ab47267a4-kube-api-access-hmjg4\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 crc kubenswrapper[4823]: I0126 16:38:11.759436 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"288cc5ba-6f03-4b43-aa8a-840ab47267a4\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:11 
crc kubenswrapper[4823]: I0126 16:38:11.811441 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 26 16:38:12 crc kubenswrapper[4823]: I0126 16:38:12.285516 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 26 16:38:12 crc kubenswrapper[4823]: I0126 16:38:12.297636 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:38:12 crc kubenswrapper[4823]: I0126 16:38:12.438665 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"288cc5ba-6f03-4b43-aa8a-840ab47267a4","Type":"ContainerStarted","Data":"d8f4fe43ac9750254a11373d130e567aceedbdd98a6b3b2e3d4e1662737d794d"} Jan 26 16:38:13 crc kubenswrapper[4823]: I0126 16:38:13.451612 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"288cc5ba-6f03-4b43-aa8a-840ab47267a4","Type":"ContainerStarted","Data":"6892b0323d8efae0824677ddf4b0a7f25bddc4383907cfd7fbe8ec5eac94cae4"} Jan 26 16:38:13 crc kubenswrapper[4823]: I0126 16:38:13.477249 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.6976913649999998 podStartE2EDuration="2.477229851s" podCreationTimestamp="2026-01-26 16:38:11 +0000 UTC" firstStartedPulling="2026-01-26 16:38:12.297327101 +0000 UTC m=+6688.982790206" lastFinishedPulling="2026-01-26 16:38:13.076865577 +0000 UTC m=+6689.762328692" observedRunningTime="2026-01-26 16:38:13.471852363 +0000 UTC m=+6690.157315468" watchObservedRunningTime="2026-01-26 16:38:13.477229851 +0000 UTC m=+6690.162692956" Jan 26 16:38:21 crc kubenswrapper[4823]: I0126 16:38:21.560228 4823 scope.go:117] "RemoveContainer" 
containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:38:21 crc kubenswrapper[4823]: E0126 16:38:21.560973 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:38:32 crc kubenswrapper[4823]: I0126 16:38:32.560422 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:38:32 crc kubenswrapper[4823]: E0126 16:38:32.561609 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:38:47 crc kubenswrapper[4823]: I0126 16:38:47.561158 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:38:47 crc kubenswrapper[4823]: E0126 16:38:47.561950 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:38:59 crc kubenswrapper[4823]: I0126 16:38:59.561881 4823 scope.go:117] 
"RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:38:59 crc kubenswrapper[4823]: E0126 16:38:59.562880 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:39:11 crc kubenswrapper[4823]: I0126 16:39:11.561081 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:39:11 crc kubenswrapper[4823]: E0126 16:39:11.572870 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:39:25 crc kubenswrapper[4823]: I0126 16:39:25.561200 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:39:25 crc kubenswrapper[4823]: E0126 16:39:25.561955 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:39:36 crc kubenswrapper[4823]: I0126 16:39:36.560629 
4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:39:36 crc kubenswrapper[4823]: E0126 16:39:36.561414 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:39:48 crc kubenswrapper[4823]: I0126 16:39:48.562097 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:39:48 crc kubenswrapper[4823]: E0126 16:39:48.563103 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:39:59 crc kubenswrapper[4823]: I0126 16:39:59.560051 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:39:59 crc kubenswrapper[4823]: E0126 16:39:59.561013 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:40:11 crc kubenswrapper[4823]: I0126 
16:40:11.560391 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:40:12 crc kubenswrapper[4823]: I0126 16:40:12.578301 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"93fe1595a72846e027d18522cd5d122ac0ce13194f8ae8b8cfebd250e5574b1f"} Jan 26 16:42:34 crc kubenswrapper[4823]: I0126 16:42:34.508864 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:42:34 crc kubenswrapper[4823]: I0126 16:42:34.509629 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:43:04 crc kubenswrapper[4823]: I0126 16:43:04.507821 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:43:04 crc kubenswrapper[4823]: I0126 16:43:04.509313 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:43:05 crc 
kubenswrapper[4823]: I0126 16:43:05.511179 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-f7lg6"] Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.513516 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.523324 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f7lg6"] Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.692108 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tmbv\" (UniqueName: \"kubernetes.io/projected/b084919f-4a7e-4abc-aeb4-d9871de0ab15-kube-api-access-8tmbv\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.692386 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-utilities\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.692452 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-catalog-content\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.702257 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ncxq4"] Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.704471 
4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.719059 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncxq4"] Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.794234 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-utilities\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.794299 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-catalog-content\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.794337 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tmbv\" (UniqueName: \"kubernetes.io/projected/b084919f-4a7e-4abc-aeb4-d9871de0ab15-kube-api-access-8tmbv\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.794894 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-utilities\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.795401 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-catalog-content\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.818844 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tmbv\" (UniqueName: \"kubernetes.io/projected/b084919f-4a7e-4abc-aeb4-d9871de0ab15-kube-api-access-8tmbv\") pod \"certified-operators-f7lg6\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.849481 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.896076 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qrbt\" (UniqueName: \"kubernetes.io/projected/678d51af-fa3d-47d9-80e8-ce85ad72a386-kube-api-access-8qrbt\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.896880 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-catalog-content\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:05 crc kubenswrapper[4823]: I0126 16:43:05.897100 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-utilities\") pod \"community-operators-ncxq4\" 
(UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:05.999063 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-utilities\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:05.999137 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qrbt\" (UniqueName: \"kubernetes.io/projected/678d51af-fa3d-47d9-80e8-ce85ad72a386-kube-api-access-8qrbt\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:05.999204 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-catalog-content\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:05.999673 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-catalog-content\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:05.999676 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-utilities\") pod \"community-operators-ncxq4\" (UID: 
\"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:06.027455 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qrbt\" (UniqueName: \"kubernetes.io/projected/678d51af-fa3d-47d9-80e8-ce85ad72a386-kube-api-access-8qrbt\") pod \"community-operators-ncxq4\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:06.320178 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:06.424441 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-f7lg6"] Jan 26 16:43:06 crc kubenswrapper[4823]: W0126 16:43:06.834945 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod678d51af_fa3d_47d9_80e8_ce85ad72a386.slice/crio-2a55a219af263b375211c1150d0db43eb185a09d9699c5bd82cbcfd50db48d73 WatchSource:0}: Error finding container 2a55a219af263b375211c1150d0db43eb185a09d9699c5bd82cbcfd50db48d73: Status 404 returned error can't find the container with id 2a55a219af263b375211c1150d0db43eb185a09d9699c5bd82cbcfd50db48d73 Jan 26 16:43:06 crc kubenswrapper[4823]: I0126 16:43:06.837813 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ncxq4"] Jan 26 16:43:07 crc kubenswrapper[4823]: I0126 16:43:07.263129 4823 generic.go:334] "Generic (PLEG): container finished" podID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerID="7010eb57b128039617660b4607119ef3c95d1eec54a2b6dafca74efd31c2adc5" exitCode=0 Jan 26 16:43:07 crc kubenswrapper[4823]: I0126 16:43:07.263203 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerDied","Data":"7010eb57b128039617660b4607119ef3c95d1eec54a2b6dafca74efd31c2adc5"} Jan 26 16:43:07 crc kubenswrapper[4823]: I0126 16:43:07.263231 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerStarted","Data":"b7a0977adf031de30e49d9ef7578172d947e34ee2c7ba8332779900019b156e3"} Jan 26 16:43:07 crc kubenswrapper[4823]: I0126 16:43:07.271240 4823 generic.go:334] "Generic (PLEG): container finished" podID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerID="b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93" exitCode=0 Jan 26 16:43:07 crc kubenswrapper[4823]: I0126 16:43:07.271289 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerDied","Data":"b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93"} Jan 26 16:43:07 crc kubenswrapper[4823]: I0126 16:43:07.271317 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerStarted","Data":"2a55a219af263b375211c1150d0db43eb185a09d9699c5bd82cbcfd50db48d73"} Jan 26 16:43:08 crc kubenswrapper[4823]: I0126 16:43:08.279670 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerStarted","Data":"0a2d860b18d176031b5e3184b174fbb259762ab5387792cd9590de5c3f2ac921"} Jan 26 16:43:08 crc kubenswrapper[4823]: I0126 16:43:08.281944 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" 
event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerStarted","Data":"5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef"} Jan 26 16:43:09 crc kubenswrapper[4823]: I0126 16:43:09.308860 4823 generic.go:334] "Generic (PLEG): container finished" podID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerID="5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef" exitCode=0 Jan 26 16:43:09 crc kubenswrapper[4823]: I0126 16:43:09.309000 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerDied","Data":"5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef"} Jan 26 16:43:09 crc kubenswrapper[4823]: I0126 16:43:09.312379 4823 generic.go:334] "Generic (PLEG): container finished" podID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerID="0a2d860b18d176031b5e3184b174fbb259762ab5387792cd9590de5c3f2ac921" exitCode=0 Jan 26 16:43:09 crc kubenswrapper[4823]: I0126 16:43:09.312438 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerDied","Data":"0a2d860b18d176031b5e3184b174fbb259762ab5387792cd9590de5c3f2ac921"} Jan 26 16:43:10 crc kubenswrapper[4823]: I0126 16:43:10.331891 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerStarted","Data":"a9c7f72631a0f62b58d245b4d6e4e60abef99364aad3c612a10e37d444da40d7"} Jan 26 16:43:10 crc kubenswrapper[4823]: I0126 16:43:10.341252 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerStarted","Data":"3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae"} Jan 26 16:43:10 crc kubenswrapper[4823]: 
I0126 16:43:10.357694 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-f7lg6" podStartSLOduration=2.9042569289999998 podStartE2EDuration="5.357680295s" podCreationTimestamp="2026-01-26 16:43:05 +0000 UTC" firstStartedPulling="2026-01-26 16:43:07.26542349 +0000 UTC m=+6983.950886595" lastFinishedPulling="2026-01-26 16:43:09.718846856 +0000 UTC m=+6986.404309961" observedRunningTime="2026-01-26 16:43:10.352833052 +0000 UTC m=+6987.038296167" watchObservedRunningTime="2026-01-26 16:43:10.357680295 +0000 UTC m=+6987.043143390" Jan 26 16:43:10 crc kubenswrapper[4823]: I0126 16:43:10.382191 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ncxq4" podStartSLOduration=2.878093752 podStartE2EDuration="5.382170566s" podCreationTimestamp="2026-01-26 16:43:05 +0000 UTC" firstStartedPulling="2026-01-26 16:43:07.273435 +0000 UTC m=+6983.958898105" lastFinishedPulling="2026-01-26 16:43:09.777511814 +0000 UTC m=+6986.462974919" observedRunningTime="2026-01-26 16:43:10.370579788 +0000 UTC m=+6987.056042893" watchObservedRunningTime="2026-01-26 16:43:10.382170566 +0000 UTC m=+6987.067633671" Jan 26 16:43:15 crc kubenswrapper[4823]: I0126 16:43:15.850501 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:15 crc kubenswrapper[4823]: I0126 16:43:15.851122 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:15 crc kubenswrapper[4823]: I0126 16:43:15.906699 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:16 crc kubenswrapper[4823]: I0126 16:43:16.321080 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ncxq4" Jan 
26 16:43:16 crc kubenswrapper[4823]: I0126 16:43:16.321155 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:16 crc kubenswrapper[4823]: I0126 16:43:16.374062 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:16 crc kubenswrapper[4823]: I0126 16:43:16.444135 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:16 crc kubenswrapper[4823]: I0126 16:43:16.447443 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:17 crc kubenswrapper[4823]: I0126 16:43:17.750624 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncxq4"] Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.409884 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ncxq4" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="registry-server" containerID="cri-o://3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae" gracePeriod=2 Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.754442 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f7lg6"] Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.754944 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-f7lg6" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="registry-server" containerID="cri-o://a9c7f72631a0f62b58d245b4d6e4e60abef99364aad3c612a10e37d444da40d7" gracePeriod=2 Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.850428 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.961664 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qrbt\" (UniqueName: \"kubernetes.io/projected/678d51af-fa3d-47d9-80e8-ce85ad72a386-kube-api-access-8qrbt\") pod \"678d51af-fa3d-47d9-80e8-ce85ad72a386\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.961971 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-catalog-content\") pod \"678d51af-fa3d-47d9-80e8-ce85ad72a386\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.962001 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-utilities\") pod \"678d51af-fa3d-47d9-80e8-ce85ad72a386\" (UID: \"678d51af-fa3d-47d9-80e8-ce85ad72a386\") " Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.963185 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-utilities" (OuterVolumeSpecName: "utilities") pod "678d51af-fa3d-47d9-80e8-ce85ad72a386" (UID: "678d51af-fa3d-47d9-80e8-ce85ad72a386"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:43:18 crc kubenswrapper[4823]: I0126 16:43:18.968634 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/678d51af-fa3d-47d9-80e8-ce85ad72a386-kube-api-access-8qrbt" (OuterVolumeSpecName: "kube-api-access-8qrbt") pod "678d51af-fa3d-47d9-80e8-ce85ad72a386" (UID: "678d51af-fa3d-47d9-80e8-ce85ad72a386"). InnerVolumeSpecName "kube-api-access-8qrbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.064641 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qrbt\" (UniqueName: \"kubernetes.io/projected/678d51af-fa3d-47d9-80e8-ce85ad72a386-kube-api-access-8qrbt\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.064677 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.347156 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "678d51af-fa3d-47d9-80e8-ce85ad72a386" (UID: "678d51af-fa3d-47d9-80e8-ce85ad72a386"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.370001 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/678d51af-fa3d-47d9-80e8-ce85ad72a386-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.421968 4823 generic.go:334] "Generic (PLEG): container finished" podID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerID="3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae" exitCode=0 Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.422018 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerDied","Data":"3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae"} Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.422026 4823 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-ncxq4" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.422046 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ncxq4" event={"ID":"678d51af-fa3d-47d9-80e8-ce85ad72a386","Type":"ContainerDied","Data":"2a55a219af263b375211c1150d0db43eb185a09d9699c5bd82cbcfd50db48d73"} Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.422065 4823 scope.go:117] "RemoveContainer" containerID="3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.443193 4823 scope.go:117] "RemoveContainer" containerID="5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.459593 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ncxq4"] Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.468446 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ncxq4"] Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.479357 4823 scope.go:117] "RemoveContainer" containerID="b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.497502 4823 scope.go:117] "RemoveContainer" containerID="3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae" Jan 26 16:43:19 crc kubenswrapper[4823]: E0126 16:43:19.497935 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae\": container with ID starting with 3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae not found: ID does not exist" containerID="3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.497973 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae"} err="failed to get container status \"3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae\": rpc error: code = NotFound desc = could not find container \"3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae\": container with ID starting with 3712860e8fd8977071dfb867be54d873434b266c07d39b7835c37df06b17d1ae not found: ID does not exist" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.497999 4823 scope.go:117] "RemoveContainer" containerID="5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef" Jan 26 16:43:19 crc kubenswrapper[4823]: E0126 16:43:19.498347 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef\": container with ID starting with 5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef not found: ID does not exist" containerID="5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.498407 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef"} err="failed to get container status \"5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef\": rpc error: code = NotFound desc = could not find container \"5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef\": container with ID starting with 5db07e180f7a04ea6448299d8fb1c035194bbba3439fae7acbf754763e9011ef not found: ID does not exist" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.498438 4823 scope.go:117] "RemoveContainer" containerID="b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93" Jan 26 16:43:19 crc kubenswrapper[4823]: E0126 
16:43:19.498789 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93\": container with ID starting with b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93 not found: ID does not exist" containerID="b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.498814 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93"} err="failed to get container status \"b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93\": rpc error: code = NotFound desc = could not find container \"b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93\": container with ID starting with b630f889110c7300c9da890a8eaa02793b3622cc521cd761c608e5ce064a8b93 not found: ID does not exist" Jan 26 16:43:19 crc kubenswrapper[4823]: I0126 16:43:19.571535 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" path="/var/lib/kubelet/pods/678d51af-fa3d-47d9-80e8-ce85ad72a386/volumes" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.451046 4823 generic.go:334] "Generic (PLEG): container finished" podID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerID="a9c7f72631a0f62b58d245b4d6e4e60abef99364aad3c612a10e37d444da40d7" exitCode=0 Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.451189 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerDied","Data":"a9c7f72631a0f62b58d245b4d6e4e60abef99364aad3c612a10e37d444da40d7"} Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.451427 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-f7lg6" event={"ID":"b084919f-4a7e-4abc-aeb4-d9871de0ab15","Type":"ContainerDied","Data":"b7a0977adf031de30e49d9ef7578172d947e34ee2c7ba8332779900019b156e3"} Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.451450 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a0977adf031de30e49d9ef7578172d947e34ee2c7ba8332779900019b156e3" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.464489 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.612745 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-utilities\") pod \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.612819 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tmbv\" (UniqueName: \"kubernetes.io/projected/b084919f-4a7e-4abc-aeb4-d9871de0ab15-kube-api-access-8tmbv\") pod \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.613057 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-catalog-content\") pod \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\" (UID: \"b084919f-4a7e-4abc-aeb4-d9871de0ab15\") " Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.614144 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-utilities" (OuterVolumeSpecName: "utilities") pod "b084919f-4a7e-4abc-aeb4-d9871de0ab15" (UID: 
"b084919f-4a7e-4abc-aeb4-d9871de0ab15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.629165 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b084919f-4a7e-4abc-aeb4-d9871de0ab15-kube-api-access-8tmbv" (OuterVolumeSpecName: "kube-api-access-8tmbv") pod "b084919f-4a7e-4abc-aeb4-d9871de0ab15" (UID: "b084919f-4a7e-4abc-aeb4-d9871de0ab15"). InnerVolumeSpecName "kube-api-access-8tmbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.672576 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b084919f-4a7e-4abc-aeb4-d9871de0ab15" (UID: "b084919f-4a7e-4abc-aeb4-d9871de0ab15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.715218 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.715259 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tmbv\" (UniqueName: \"kubernetes.io/projected/b084919f-4a7e-4abc-aeb4-d9871de0ab15-kube-api-access-8tmbv\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:20 crc kubenswrapper[4823]: I0126 16:43:20.715274 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b084919f-4a7e-4abc-aeb4-d9871de0ab15-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:43:21 crc kubenswrapper[4823]: I0126 16:43:21.471298 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-f7lg6" Jan 26 16:43:21 crc kubenswrapper[4823]: I0126 16:43:21.505656 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-f7lg6"] Jan 26 16:43:21 crc kubenswrapper[4823]: I0126 16:43:21.513540 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-f7lg6"] Jan 26 16:43:21 crc kubenswrapper[4823]: I0126 16:43:21.571490 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" path="/var/lib/kubelet/pods/b084919f-4a7e-4abc-aeb4-d9871de0ab15/volumes" Jan 26 16:43:34 crc kubenswrapper[4823]: I0126 16:43:34.508675 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:43:34 crc kubenswrapper[4823]: I0126 16:43:34.509342 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:43:34 crc kubenswrapper[4823]: I0126 16:43:34.509515 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:43:34 crc kubenswrapper[4823]: I0126 16:43:34.510581 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"93fe1595a72846e027d18522cd5d122ac0ce13194f8ae8b8cfebd250e5574b1f"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:43:34 crc kubenswrapper[4823]: I0126 16:43:34.510656 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://93fe1595a72846e027d18522cd5d122ac0ce13194f8ae8b8cfebd250e5574b1f" gracePeriod=600 Jan 26 16:43:35 crc kubenswrapper[4823]: I0126 16:43:35.615342 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="93fe1595a72846e027d18522cd5d122ac0ce13194f8ae8b8cfebd250e5574b1f" exitCode=0 Jan 26 16:43:35 crc kubenswrapper[4823]: I0126 16:43:35.615430 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"93fe1595a72846e027d18522cd5d122ac0ce13194f8ae8b8cfebd250e5574b1f"} Jan 26 16:43:35 crc kubenswrapper[4823]: I0126 16:43:35.615914 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6"} Jan 26 16:43:35 crc kubenswrapper[4823]: I0126 16:43:35.615932 4823 scope.go:117] "RemoveContainer" containerID="c1547c44e8e3d08c7839e1f3bb71c3f97354291e46fcc99c4a979576765f4741" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.152767 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb"] Jan 26 16:45:00 crc kubenswrapper[4823]: E0126 16:45:00.153528 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="registry-server" Jan 26 16:45:00 crc 
kubenswrapper[4823]: I0126 16:45:00.153540 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4823]: E0126 16:45:00.153552 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.153558 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4823]: E0126 16:45:00.153575 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.153583 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4823]: E0126 16:45:00.153597 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.153603 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="extract-content" Jan 26 16:45:00 crc kubenswrapper[4823]: E0126 16:45:00.153619 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.153626 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="extract-utilities" Jan 26 16:45:00 crc kubenswrapper[4823]: E0126 16:45:00.153637 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="registry-server" Jan 26 16:45:00 crc 
kubenswrapper[4823]: I0126 16:45:00.153643 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.153803 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b084919f-4a7e-4abc-aeb4-d9871de0ab15" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.153815 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="678d51af-fa3d-47d9-80e8-ce85ad72a386" containerName="registry-server" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.154447 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.156625 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.156843 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.176768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb"] Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.212852 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32be40a-1fd9-47a5-97e9-dcbec990f96f-config-volume\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.213033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-427jc\" (UniqueName: \"kubernetes.io/projected/c32be40a-1fd9-47a5-97e9-dcbec990f96f-kube-api-access-427jc\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.213104 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32be40a-1fd9-47a5-97e9-dcbec990f96f-secret-volume\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.315500 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32be40a-1fd9-47a5-97e9-dcbec990f96f-config-volume\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.315899 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-427jc\" (UniqueName: \"kubernetes.io/projected/c32be40a-1fd9-47a5-97e9-dcbec990f96f-kube-api-access-427jc\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.316051 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32be40a-1fd9-47a5-97e9-dcbec990f96f-secret-volume\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.317631 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32be40a-1fd9-47a5-97e9-dcbec990f96f-config-volume\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.324788 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32be40a-1fd9-47a5-97e9-dcbec990f96f-secret-volume\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.338728 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-427jc\" (UniqueName: \"kubernetes.io/projected/c32be40a-1fd9-47a5-97e9-dcbec990f96f-kube-api-access-427jc\") pod \"collect-profiles-29490765-rxhzb\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.484190 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:00 crc kubenswrapper[4823]: I0126 16:45:00.940572 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb"] Jan 26 16:45:00 crc kubenswrapper[4823]: W0126 16:45:00.944688 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc32be40a_1fd9_47a5_97e9_dcbec990f96f.slice/crio-34ba8b962a7905399f88d2351112584f3a3c4ed72df478636f08ac61f5f66219 WatchSource:0}: Error finding container 34ba8b962a7905399f88d2351112584f3a3c4ed72df478636f08ac61f5f66219: Status 404 returned error can't find the container with id 34ba8b962a7905399f88d2351112584f3a3c4ed72df478636f08ac61f5f66219 Jan 26 16:45:01 crc kubenswrapper[4823]: I0126 16:45:01.495899 4823 generic.go:334] "Generic (PLEG): container finished" podID="c32be40a-1fd9-47a5-97e9-dcbec990f96f" containerID="662e2a237c0ee959abd0b34c1769febd6b23272096fa1814515671461f3ddbe4" exitCode=0 Jan 26 16:45:01 crc kubenswrapper[4823]: I0126 16:45:01.496036 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" event={"ID":"c32be40a-1fd9-47a5-97e9-dcbec990f96f","Type":"ContainerDied","Data":"662e2a237c0ee959abd0b34c1769febd6b23272096fa1814515671461f3ddbe4"} Jan 26 16:45:01 crc kubenswrapper[4823]: I0126 16:45:01.496156 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" event={"ID":"c32be40a-1fd9-47a5-97e9-dcbec990f96f","Type":"ContainerStarted","Data":"34ba8b962a7905399f88d2351112584f3a3c4ed72df478636f08ac61f5f66219"} Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.791342 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.963820 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32be40a-1fd9-47a5-97e9-dcbec990f96f-secret-volume\") pod \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.964033 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32be40a-1fd9-47a5-97e9-dcbec990f96f-config-volume\") pod \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.964086 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-427jc\" (UniqueName: \"kubernetes.io/projected/c32be40a-1fd9-47a5-97e9-dcbec990f96f-kube-api-access-427jc\") pod \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\" (UID: \"c32be40a-1fd9-47a5-97e9-dcbec990f96f\") " Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.964971 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c32be40a-1fd9-47a5-97e9-dcbec990f96f-config-volume" (OuterVolumeSpecName: "config-volume") pod "c32be40a-1fd9-47a5-97e9-dcbec990f96f" (UID: "c32be40a-1fd9-47a5-97e9-dcbec990f96f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.970085 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c32be40a-1fd9-47a5-97e9-dcbec990f96f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c32be40a-1fd9-47a5-97e9-dcbec990f96f" (UID: "c32be40a-1fd9-47a5-97e9-dcbec990f96f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 16:45:02 crc kubenswrapper[4823]: I0126 16:45:02.970463 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c32be40a-1fd9-47a5-97e9-dcbec990f96f-kube-api-access-427jc" (OuterVolumeSpecName: "kube-api-access-427jc") pod "c32be40a-1fd9-47a5-97e9-dcbec990f96f" (UID: "c32be40a-1fd9-47a5-97e9-dcbec990f96f"). InnerVolumeSpecName "kube-api-access-427jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.066051 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c32be40a-1fd9-47a5-97e9-dcbec990f96f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.066081 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-427jc\" (UniqueName: \"kubernetes.io/projected/c32be40a-1fd9-47a5-97e9-dcbec990f96f-kube-api-access-427jc\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.066092 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c32be40a-1fd9-47a5-97e9-dcbec990f96f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.515062 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" event={"ID":"c32be40a-1fd9-47a5-97e9-dcbec990f96f","Type":"ContainerDied","Data":"34ba8b962a7905399f88d2351112584f3a3c4ed72df478636f08ac61f5f66219"} Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.515112 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34ba8b962a7905399f88d2351112584f3a3c4ed72df478636f08ac61f5f66219" Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.515583 4823 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb" Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.868910 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs"] Jan 26 16:45:03 crc kubenswrapper[4823]: I0126 16:45:03.877346 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490720-f59hs"] Jan 26 16:45:05 crc kubenswrapper[4823]: I0126 16:45:05.573720 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c127e7-9656-4045-99f6-4c6403877cbb" path="/var/lib/kubelet/pods/40c127e7-9656-4045-99f6-4c6403877cbb/volumes" Jan 26 16:45:09 crc kubenswrapper[4823]: I0126 16:45:09.025612 4823 scope.go:117] "RemoveContainer" containerID="46f690ecc5480426be61e431c17d89bd1043d825249aa66167b9bca91c63b708" Jan 26 16:45:34 crc kubenswrapper[4823]: I0126 16:45:34.508461 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:45:34 crc kubenswrapper[4823]: I0126 16:45:34.509136 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:46:04 crc kubenswrapper[4823]: I0126 16:46:04.508818 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 26 16:46:04 crc kubenswrapper[4823]: I0126 16:46:04.509343 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:46:34 crc kubenswrapper[4823]: I0126 16:46:34.508665 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:46:34 crc kubenswrapper[4823]: I0126 16:46:34.509085 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:46:34 crc kubenswrapper[4823]: I0126 16:46:34.509131 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:46:34 crc kubenswrapper[4823]: I0126 16:46:34.510028 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:46:34 crc kubenswrapper[4823]: I0126 16:46:34.510078 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" gracePeriod=600 Jan 26 16:46:34 crc kubenswrapper[4823]: E0126 16:46:34.631190 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:46:35 crc kubenswrapper[4823]: I0126 16:46:35.491127 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" exitCode=0 Jan 26 16:46:35 crc kubenswrapper[4823]: I0126 16:46:35.491194 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6"} Jan 26 16:46:35 crc kubenswrapper[4823]: I0126 16:46:35.491485 4823 scope.go:117] "RemoveContainer" containerID="93fe1595a72846e027d18522cd5d122ac0ce13194f8ae8b8cfebd250e5574b1f" Jan 26 16:46:35 crc kubenswrapper[4823]: I0126 16:46:35.492160 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:46:35 crc kubenswrapper[4823]: E0126 16:46:35.492482 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:46:48 crc kubenswrapper[4823]: I0126 16:46:48.561277 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:46:48 crc kubenswrapper[4823]: E0126 16:46:48.561979 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:47:01 crc kubenswrapper[4823]: I0126 16:47:01.561441 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:47:01 crc kubenswrapper[4823]: E0126 16:47:01.562108 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:47:15 crc kubenswrapper[4823]: I0126 16:47:15.560896 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:47:15 crc kubenswrapper[4823]: E0126 16:47:15.561777 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:47:28 crc kubenswrapper[4823]: I0126 16:47:28.559910 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:47:28 crc kubenswrapper[4823]: E0126 16:47:28.560710 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.407853 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qc2dc"] Jan 26 16:47:32 crc kubenswrapper[4823]: E0126 16:47:32.409556 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c32be40a-1fd9-47a5-97e9-dcbec990f96f" containerName="collect-profiles" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.409579 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="c32be40a-1fd9-47a5-97e9-dcbec990f96f" containerName="collect-profiles" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.409885 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="c32be40a-1fd9-47a5-97e9-dcbec990f96f" containerName="collect-profiles" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.411855 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.429998 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc2dc"] Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.474509 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp4fs\" (UniqueName: \"kubernetes.io/projected/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-kube-api-access-bp4fs\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.474602 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-utilities\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.474624 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-catalog-content\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.576178 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp4fs\" (UniqueName: \"kubernetes.io/projected/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-kube-api-access-bp4fs\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.576259 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-utilities\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.576283 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-catalog-content\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.576949 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-catalog-content\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.576999 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-utilities\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.599203 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp4fs\" (UniqueName: \"kubernetes.io/projected/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-kube-api-access-bp4fs\") pod \"redhat-marketplace-qc2dc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:32 crc kubenswrapper[4823]: I0126 16:47:32.745862 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:33 crc kubenswrapper[4823]: I0126 16:47:33.267827 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc2dc"] Jan 26 16:47:34 crc kubenswrapper[4823]: I0126 16:47:34.021276 4823 generic.go:334] "Generic (PLEG): container finished" podID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerID="dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93" exitCode=0 Jan 26 16:47:34 crc kubenswrapper[4823]: I0126 16:47:34.021398 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc2dc" event={"ID":"a5fa3c35-2767-458b-83cc-f1b4043ae8fc","Type":"ContainerDied","Data":"dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93"} Jan 26 16:47:34 crc kubenswrapper[4823]: I0126 16:47:34.021726 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc2dc" event={"ID":"a5fa3c35-2767-458b-83cc-f1b4043ae8fc","Type":"ContainerStarted","Data":"c987cad42ec1b29513fb7e21cbbe5dca74b6ce0cd846e9433845d338b04b7a7e"} Jan 26 16:47:34 crc kubenswrapper[4823]: I0126 16:47:34.025584 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:47:35 crc kubenswrapper[4823]: I0126 16:47:35.032193 4823 generic.go:334] "Generic (PLEG): container finished" podID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerID="b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea" exitCode=0 Jan 26 16:47:35 crc kubenswrapper[4823]: I0126 16:47:35.032293 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc2dc" event={"ID":"a5fa3c35-2767-458b-83cc-f1b4043ae8fc","Type":"ContainerDied","Data":"b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea"} Jan 26 16:47:36 crc kubenswrapper[4823]: I0126 16:47:36.044897 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-qc2dc" event={"ID":"a5fa3c35-2767-458b-83cc-f1b4043ae8fc","Type":"ContainerStarted","Data":"4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827"} Jan 26 16:47:36 crc kubenswrapper[4823]: I0126 16:47:36.071092 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qc2dc" podStartSLOduration=2.661096308 podStartE2EDuration="4.071072694s" podCreationTimestamp="2026-01-26 16:47:32 +0000 UTC" firstStartedPulling="2026-01-26 16:47:34.025331533 +0000 UTC m=+7250.710794638" lastFinishedPulling="2026-01-26 16:47:35.435307919 +0000 UTC m=+7252.120771024" observedRunningTime="2026-01-26 16:47:36.068199875 +0000 UTC m=+7252.753662990" watchObservedRunningTime="2026-01-26 16:47:36.071072694 +0000 UTC m=+7252.756535799" Jan 26 16:47:39 crc kubenswrapper[4823]: I0126 16:47:39.561086 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:47:39 crc kubenswrapper[4823]: E0126 16:47:39.561896 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:47:42 crc kubenswrapper[4823]: I0126 16:47:42.746164 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:42 crc kubenswrapper[4823]: I0126 16:47:42.746568 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:42 crc kubenswrapper[4823]: I0126 16:47:42.799842 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:43 crc kubenswrapper[4823]: I0126 16:47:43.164378 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:43 crc kubenswrapper[4823]: I0126 16:47:43.221037 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc2dc"] Jan 26 16:47:45 crc kubenswrapper[4823]: I0126 16:47:45.120785 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qc2dc" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="registry-server" containerID="cri-o://4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827" gracePeriod=2 Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.044884 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.063456 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-catalog-content\") pod \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.063656 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bp4fs\" (UniqueName: \"kubernetes.io/projected/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-kube-api-access-bp4fs\") pod \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.063721 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-utilities\") 
pod \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\" (UID: \"a5fa3c35-2767-458b-83cc-f1b4043ae8fc\") " Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.064731 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-utilities" (OuterVolumeSpecName: "utilities") pod "a5fa3c35-2767-458b-83cc-f1b4043ae8fc" (UID: "a5fa3c35-2767-458b-83cc-f1b4043ae8fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.079534 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-kube-api-access-bp4fs" (OuterVolumeSpecName: "kube-api-access-bp4fs") pod "a5fa3c35-2767-458b-83cc-f1b4043ae8fc" (UID: "a5fa3c35-2767-458b-83cc-f1b4043ae8fc"). InnerVolumeSpecName "kube-api-access-bp4fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.089898 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5fa3c35-2767-458b-83cc-f1b4043ae8fc" (UID: "a5fa3c35-2767-458b-83cc-f1b4043ae8fc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.140092 4823 generic.go:334] "Generic (PLEG): container finished" podID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerID="4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827" exitCode=0 Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.140142 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc2dc" event={"ID":"a5fa3c35-2767-458b-83cc-f1b4043ae8fc","Type":"ContainerDied","Data":"4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827"} Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.140170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qc2dc" event={"ID":"a5fa3c35-2767-458b-83cc-f1b4043ae8fc","Type":"ContainerDied","Data":"c987cad42ec1b29513fb7e21cbbe5dca74b6ce0cd846e9433845d338b04b7a7e"} Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.140188 4823 scope.go:117] "RemoveContainer" containerID="4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.140335 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qc2dc" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.167491 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bp4fs\" (UniqueName: \"kubernetes.io/projected/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-kube-api-access-bp4fs\") on node \"crc\" DevicePath \"\"" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.167550 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.167563 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5fa3c35-2767-458b-83cc-f1b4043ae8fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.200189 4823 scope.go:117] "RemoveContainer" containerID="b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.203033 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc2dc"] Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.213881 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qc2dc"] Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.246992 4823 scope.go:117] "RemoveContainer" containerID="dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.286207 4823 scope.go:117] "RemoveContainer" containerID="4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827" Jan 26 16:47:46 crc kubenswrapper[4823]: E0126 16:47:46.287455 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827\": container with ID starting with 4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827 not found: ID does not exist" containerID="4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.287489 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827"} err="failed to get container status \"4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827\": rpc error: code = NotFound desc = could not find container \"4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827\": container with ID starting with 4708aa0ff8b0b2e48a0c8407ee66848e62fe9d0aeaf751c27537ae08a6d93827 not found: ID does not exist" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.287533 4823 scope.go:117] "RemoveContainer" containerID="b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea" Jan 26 16:47:46 crc kubenswrapper[4823]: E0126 16:47:46.287936 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea\": container with ID starting with b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea not found: ID does not exist" containerID="b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.287982 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea"} err="failed to get container status \"b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea\": rpc error: code = NotFound desc = could not find container \"b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea\": container with ID 
starting with b32f82ec962cf1b17f6cb4ecd3137756ab9ab7e71e9ab331eda19503cb0fecea not found: ID does not exist" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.288001 4823 scope.go:117] "RemoveContainer" containerID="dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93" Jan 26 16:47:46 crc kubenswrapper[4823]: E0126 16:47:46.288276 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93\": container with ID starting with dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93 not found: ID does not exist" containerID="dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93" Jan 26 16:47:46 crc kubenswrapper[4823]: I0126 16:47:46.288305 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93"} err="failed to get container status \"dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93\": rpc error: code = NotFound desc = could not find container \"dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93\": container with ID starting with dfa51fe45710577be45b964c7a8c2b5a9c3d44ceac739ba2a11c5e997a753e93 not found: ID does not exist" Jan 26 16:47:47 crc kubenswrapper[4823]: I0126 16:47:47.585931 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" path="/var/lib/kubelet/pods/a5fa3c35-2767-458b-83cc-f1b4043ae8fc/volumes" Jan 26 16:47:54 crc kubenswrapper[4823]: I0126 16:47:54.560794 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:47:54 crc kubenswrapper[4823]: E0126 16:47:54.561643 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:48:07 crc kubenswrapper[4823]: I0126 16:48:07.560629 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:48:07 crc kubenswrapper[4823]: E0126 16:48:07.562211 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:48:20 crc kubenswrapper[4823]: I0126 16:48:20.560818 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:48:20 crc kubenswrapper[4823]: E0126 16:48:20.566246 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:48:32 crc kubenswrapper[4823]: I0126 16:48:32.560153 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:48:32 crc kubenswrapper[4823]: E0126 16:48:32.561022 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:48:45 crc kubenswrapper[4823]: I0126 16:48:45.561298 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:48:45 crc kubenswrapper[4823]: E0126 16:48:45.562635 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:48:59 crc kubenswrapper[4823]: I0126 16:48:59.561438 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:48:59 crc kubenswrapper[4823]: E0126 16:48:59.562511 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:49:09 crc kubenswrapper[4823]: I0126 16:49:09.188610 4823 scope.go:117] "RemoveContainer" containerID="7010eb57b128039617660b4607119ef3c95d1eec54a2b6dafca74efd31c2adc5" Jan 26 16:49:09 crc kubenswrapper[4823]: I0126 16:49:09.219112 4823 scope.go:117] "RemoveContainer" containerID="0a2d860b18d176031b5e3184b174fbb259762ab5387792cd9590de5c3f2ac921" Jan 26 
16:49:12 crc kubenswrapper[4823]: I0126 16:49:12.560428 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:49:12 crc kubenswrapper[4823]: E0126 16:49:12.561113 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:49:23 crc kubenswrapper[4823]: I0126 16:49:23.574955 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:49:23 crc kubenswrapper[4823]: E0126 16:49:23.575862 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:49:34 crc kubenswrapper[4823]: I0126 16:49:34.560892 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:49:34 crc kubenswrapper[4823]: E0126 16:49:34.562967 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:49:48 crc kubenswrapper[4823]: I0126 16:49:48.561200 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:49:48 crc kubenswrapper[4823]: E0126 16:49:48.562063 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:50:02 crc kubenswrapper[4823]: I0126 16:50:02.560755 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:50:02 crc kubenswrapper[4823]: E0126 16:50:02.561581 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:50:09 crc kubenswrapper[4823]: I0126 16:50:09.294988 4823 scope.go:117] "RemoveContainer" containerID="a9c7f72631a0f62b58d245b4d6e4e60abef99364aad3c612a10e37d444da40d7" Jan 26 16:50:15 crc kubenswrapper[4823]: I0126 16:50:15.561645 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:50:15 crc kubenswrapper[4823]: E0126 16:50:15.562510 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:50:26 crc kubenswrapper[4823]: I0126 16:50:26.560083 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:50:26 crc kubenswrapper[4823]: E0126 16:50:26.560886 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:50:40 crc kubenswrapper[4823]: I0126 16:50:40.560959 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:50:40 crc kubenswrapper[4823]: E0126 16:50:40.561800 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:50:51 crc kubenswrapper[4823]: I0126 16:50:51.560628 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:50:51 crc kubenswrapper[4823]: E0126 16:50:51.561419 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:51:04 crc kubenswrapper[4823]: I0126 16:51:04.560621 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:51:04 crc kubenswrapper[4823]: E0126 16:51:04.561480 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:51:18 crc kubenswrapper[4823]: I0126 16:51:18.560317 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:51:18 crc kubenswrapper[4823]: E0126 16:51:18.563012 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:51:31 crc kubenswrapper[4823]: I0126 16:51:31.561113 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:51:31 crc kubenswrapper[4823]: E0126 16:51:31.562091 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:51:42 crc kubenswrapper[4823]: I0126 16:51:42.561181 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:51:43 crc kubenswrapper[4823]: I0126 16:51:43.255867 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"2a0dfa55f0438eb89687dcb2f4179a461bb5c63362a881c36870444019269bf6"} Jan 26 16:53:04 crc kubenswrapper[4823]: I0126 16:53:04.830603 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-c56f9" podUID="2cdca653-4a4b-4452-9a00-5667349cb42a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:06 crc kubenswrapper[4823]: I0126 16:53:06.786487 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="953ca111-757e-44e8-9f00-1b4576cb4b3c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 16:53:11 crc kubenswrapper[4823]: I0126 16:53:11.785811 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="953ca111-757e-44e8-9f00-1b4576cb4b3c" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 16:53:16 crc kubenswrapper[4823]: I0126 16:53:16.784297 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="953ca111-757e-44e8-9f00-1b4576cb4b3c" 
containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 26 16:53:16 crc kubenswrapper[4823]: I0126 16:53:16.784750 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 26 16:53:16 crc kubenswrapper[4823]: I0126 16:53:16.785565 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"90618cf9746e520f6822fde308dbafda564a0fb2e18990eda1e6c7d304d99026"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 26 16:53:16 crc kubenswrapper[4823]: I0126 16:53:16.785646 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="953ca111-757e-44e8-9f00-1b4576cb4b3c" containerName="ceilometer-central-agent" containerID="cri-o://90618cf9746e520f6822fde308dbafda564a0fb2e18990eda1e6c7d304d99026" gracePeriod=30 Jan 26 16:53:18 crc kubenswrapper[4823]: I0126 16:53:18.115509 4823 patch_prober.go:28] interesting pod/controller-manager-9c8d4595b-mwrsz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:53:18 crc kubenswrapper[4823]: I0126 16:53:18.115837 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podUID="e759f682-8fa0-4299-bf8c-bdc87ac6a240" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:20 crc kubenswrapper[4823]: I0126 16:53:20.981624 4823 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" podUID="0e7ff918-aecf-4718-912b-d85f1dbd1799" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:26 crc kubenswrapper[4823]: I0126 16:53:26.785829 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="953ca111-757e-44e8-9f00-1b4576cb4b3c" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 26 16:53:26 crc kubenswrapper[4823]: I0126 16:53:26.891269 4823 generic.go:334] "Generic (PLEG): container finished" podID="953ca111-757e-44e8-9f00-1b4576cb4b3c" containerID="90618cf9746e520f6822fde308dbafda564a0fb2e18990eda1e6c7d304d99026" exitCode=-1 Jan 26 16:53:26 crc kubenswrapper[4823]: I0126 16:53:26.891315 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerDied","Data":"90618cf9746e520f6822fde308dbafda564a0fb2e18990eda1e6c7d304d99026"} Jan 26 16:53:28 crc kubenswrapper[4823]: I0126 16:53:28.113326 4823 patch_prober.go:28] interesting pod/controller-manager-9c8d4595b-mwrsz container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:53:28 crc kubenswrapper[4823]: I0126 16:53:28.113702 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podUID="e759f682-8fa0-4299-bf8c-bdc87ac6a240" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 
26 16:53:28 crc kubenswrapper[4823]: I0126 16:53:28.113511 4823 patch_prober.go:28] interesting pod/controller-manager-9c8d4595b-mwrsz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:53:28 crc kubenswrapper[4823]: I0126 16:53:28.113942 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podUID="e759f682-8fa0-4299-bf8c-bdc87ac6a240" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:30 crc kubenswrapper[4823]: I0126 16:53:30.982538 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" podUID="0e7ff918-aecf-4718-912b-d85f1dbd1799" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.586907 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bw9bx"] Jan 26 16:53:35 crc kubenswrapper[4823]: E0126 16:53:35.588037 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="extract-content" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.588056 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="extract-content" Jan 26 16:53:35 crc kubenswrapper[4823]: E0126 16:53:35.588082 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="registry-server" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.588089 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="registry-server" Jan 26 16:53:35 crc kubenswrapper[4823]: E0126 16:53:35.588111 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="extract-utilities" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.588119 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="extract-utilities" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.588344 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5fa3c35-2767-458b-83cc-f1b4043ae8fc" containerName="registry-server" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.590064 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.603396 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bw9bx"] Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.659982 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhsvq\" (UniqueName: \"kubernetes.io/projected/b26cbd48-f73b-44b8-9758-c44a58311eb7-kube-api-access-fhsvq\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.660030 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-utilities\") pod \"community-operators-bw9bx\" (UID: 
\"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.660130 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-catalog-content\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.762186 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhsvq\" (UniqueName: \"kubernetes.io/projected/b26cbd48-f73b-44b8-9758-c44a58311eb7-kube-api-access-fhsvq\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.762235 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-utilities\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.762389 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-catalog-content\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.763037 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-catalog-content\") pod \"community-operators-bw9bx\" (UID: 
\"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.763051 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-utilities\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.789343 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhsvq\" (UniqueName: \"kubernetes.io/projected/b26cbd48-f73b-44b8-9758-c44a58311eb7-kube-api-access-fhsvq\") pod \"community-operators-bw9bx\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:35 crc kubenswrapper[4823]: I0126 16:53:35.916295 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:36 crc kubenswrapper[4823]: I0126 16:53:36.186771 4823 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-n6rwb container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:53:36 crc kubenswrapper[4823]: I0126 16:53:36.186833 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-n6rwb" podUID="4fff3391-dc10-4c2b-8868-40123c8147e6" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.65:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:38 crc kubenswrapper[4823]: I0126 16:53:38.111487 4823 patch_prober.go:28] interesting 
pod/controller-manager-9c8d4595b-mwrsz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:53:38 crc kubenswrapper[4823]: I0126 16:53:38.111916 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podUID="e759f682-8fa0-4299-bf8c-bdc87ac6a240" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:38 crc kubenswrapper[4823]: I0126 16:53:38.111571 4823 patch_prober.go:28] interesting pod/controller-manager-9c8d4595b-mwrsz container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 26 16:53:38 crc kubenswrapper[4823]: I0126 16:53:38.112076 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-9c8d4595b-mwrsz" podUID="e759f682-8fa0-4299-bf8c-bdc87ac6a240" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:41 crc kubenswrapper[4823]: I0126 16:53:41.022623 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" podUID="0e7ff918-aecf-4718-912b-d85f1dbd1799" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 26 16:53:41 crc kubenswrapper[4823]: I0126 16:53:41.022636 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" podUID="0e7ff918-aecf-4718-912b-d85f1dbd1799" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:41 crc kubenswrapper[4823]: I0126 16:53:41.023227 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 16:53:42 crc kubenswrapper[4823]: I0126 16:53:42.066670 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" podUID="0e7ff918-aecf-4718-912b-d85f1dbd1799" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 16:53:43 crc kubenswrapper[4823]: I0126 16:53:43.633855 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 16:53:43 crc kubenswrapper[4823]: I0126 16:53:43.647853 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bw9bx"] Jan 26 16:53:44 crc kubenswrapper[4823]: I0126 16:53:44.054811 4823 generic.go:334] "Generic (PLEG): container finished" podID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerID="95c98b5c8456a2c609d7face73c8f501132c0c61de29efd2f181d5bedfb9f175" exitCode=0 Jan 26 16:53:44 crc kubenswrapper[4823]: I0126 16:53:44.054915 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw9bx" event={"ID":"b26cbd48-f73b-44b8-9758-c44a58311eb7","Type":"ContainerDied","Data":"95c98b5c8456a2c609d7face73c8f501132c0c61de29efd2f181d5bedfb9f175"} Jan 
26 16:53:44 crc kubenswrapper[4823]: I0126 16:53:44.054988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw9bx" event={"ID":"b26cbd48-f73b-44b8-9758-c44a58311eb7","Type":"ContainerStarted","Data":"05fc544a91daef44a8490dddaaff4c7a6e68ec7f7e4e1b382defb43fb3410e7f"} Jan 26 16:53:45 crc kubenswrapper[4823]: I0126 16:53:45.072870 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"953ca111-757e-44e8-9f00-1b4576cb4b3c","Type":"ContainerStarted","Data":"b0301683d4925757c9676f8948867ea7f4e4505a7a2361305c99067625d584eb"} Jan 26 16:53:46 crc kubenswrapper[4823]: I0126 16:53:46.086291 4823 generic.go:334] "Generic (PLEG): container finished" podID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerID="1c11e3e25bea8b5ac3392d5c614fc99d457eac2a7605c992a02600309d6ed7ad" exitCode=0 Jan 26 16:53:46 crc kubenswrapper[4823]: I0126 16:53:46.086385 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw9bx" event={"ID":"b26cbd48-f73b-44b8-9758-c44a58311eb7","Type":"ContainerDied","Data":"1c11e3e25bea8b5ac3392d5c614fc99d457eac2a7605c992a02600309d6ed7ad"} Jan 26 16:53:47 crc kubenswrapper[4823]: I0126 16:53:47.095525 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw9bx" event={"ID":"b26cbd48-f73b-44b8-9758-c44a58311eb7","Type":"ContainerStarted","Data":"0d311cd0cdc9767e4da707ed030cafd4ac323dfa30880d5da26348cbab7f57fa"} Jan 26 16:53:47 crc kubenswrapper[4823]: I0126 16:53:47.121911 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bw9bx" podStartSLOduration=9.518713893 podStartE2EDuration="12.121891674s" podCreationTimestamp="2026-01-26 16:53:35 +0000 UTC" firstStartedPulling="2026-01-26 16:53:44.057610745 +0000 UTC m=+7620.743073860" lastFinishedPulling="2026-01-26 16:53:46.660788536 +0000 UTC m=+7623.346251641" 
observedRunningTime="2026-01-26 16:53:47.12174818 +0000 UTC m=+7623.807211285" watchObservedRunningTime="2026-01-26 16:53:47.121891674 +0000 UTC m=+7623.807354779" Jan 26 16:53:49 crc kubenswrapper[4823]: I0126 16:53:49.943327 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-7fc556f645-qgpp5" Jan 26 16:53:55 crc kubenswrapper[4823]: I0126 16:53:55.916873 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:55 crc kubenswrapper[4823]: I0126 16:53:55.918581 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:55 crc kubenswrapper[4823]: I0126 16:53:55.967289 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:56 crc kubenswrapper[4823]: I0126 16:53:56.224552 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:56 crc kubenswrapper[4823]: I0126 16:53:56.278771 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bw9bx"] Jan 26 16:53:58 crc kubenswrapper[4823]: I0126 16:53:58.192175 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bw9bx" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="registry-server" containerID="cri-o://0d311cd0cdc9767e4da707ed030cafd4ac323dfa30880d5da26348cbab7f57fa" gracePeriod=2 Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.206789 4823 generic.go:334] "Generic (PLEG): container finished" podID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerID="0d311cd0cdc9767e4da707ed030cafd4ac323dfa30880d5da26348cbab7f57fa" exitCode=0 Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 
16:53:59.206867 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw9bx" event={"ID":"b26cbd48-f73b-44b8-9758-c44a58311eb7","Type":"ContainerDied","Data":"0d311cd0cdc9767e4da707ed030cafd4ac323dfa30880d5da26348cbab7f57fa"} Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.208343 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bw9bx" event={"ID":"b26cbd48-f73b-44b8-9758-c44a58311eb7","Type":"ContainerDied","Data":"05fc544a91daef44a8490dddaaff4c7a6e68ec7f7e4e1b382defb43fb3410e7f"} Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.208399 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05fc544a91daef44a8490dddaaff4c7a6e68ec7f7e4e1b382defb43fb3410e7f" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.237470 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.333203 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhsvq\" (UniqueName: \"kubernetes.io/projected/b26cbd48-f73b-44b8-9758-c44a58311eb7-kube-api-access-fhsvq\") pod \"b26cbd48-f73b-44b8-9758-c44a58311eb7\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.333911 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-catalog-content\") pod \"b26cbd48-f73b-44b8-9758-c44a58311eb7\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.334239 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-utilities\") pod 
\"b26cbd48-f73b-44b8-9758-c44a58311eb7\" (UID: \"b26cbd48-f73b-44b8-9758-c44a58311eb7\") " Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.338255 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-utilities" (OuterVolumeSpecName: "utilities") pod "b26cbd48-f73b-44b8-9758-c44a58311eb7" (UID: "b26cbd48-f73b-44b8-9758-c44a58311eb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.357611 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26cbd48-f73b-44b8-9758-c44a58311eb7-kube-api-access-fhsvq" (OuterVolumeSpecName: "kube-api-access-fhsvq") pod "b26cbd48-f73b-44b8-9758-c44a58311eb7" (UID: "b26cbd48-f73b-44b8-9758-c44a58311eb7"). InnerVolumeSpecName "kube-api-access-fhsvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.406922 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b26cbd48-f73b-44b8-9758-c44a58311eb7" (UID: "b26cbd48-f73b-44b8-9758-c44a58311eb7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.438814 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.438858 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b26cbd48-f73b-44b8-9758-c44a58311eb7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:53:59 crc kubenswrapper[4823]: I0126 16:53:59.438869 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhsvq\" (UniqueName: \"kubernetes.io/projected/b26cbd48-f73b-44b8-9758-c44a58311eb7-kube-api-access-fhsvq\") on node \"crc\" DevicePath \"\"" Jan 26 16:54:00 crc kubenswrapper[4823]: I0126 16:54:00.216348 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bw9bx" Jan 26 16:54:00 crc kubenswrapper[4823]: I0126 16:54:00.241410 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bw9bx"] Jan 26 16:54:00 crc kubenswrapper[4823]: I0126 16:54:00.251269 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bw9bx"] Jan 26 16:54:01 crc kubenswrapper[4823]: I0126 16:54:01.572539 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" path="/var/lib/kubelet/pods/b26cbd48-f73b-44b8-9758-c44a58311eb7/volumes" Jan 26 16:54:04 crc kubenswrapper[4823]: I0126 16:54:04.508019 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 26 16:54:04 crc kubenswrapper[4823]: I0126 16:54:04.508620 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:54:34 crc kubenswrapper[4823]: I0126 16:54:34.508548 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:54:34 crc kubenswrapper[4823]: I0126 16:54:34.509031 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.508604 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.509211 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.509278 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.510680 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a0dfa55f0438eb89687dcb2f4179a461bb5c63362a881c36870444019269bf6"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.510743 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://2a0dfa55f0438eb89687dcb2f4179a461bb5c63362a881c36870444019269bf6" gracePeriod=600 Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.797930 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="2a0dfa55f0438eb89687dcb2f4179a461bb5c63362a881c36870444019269bf6" exitCode=0 Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.798042 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"2a0dfa55f0438eb89687dcb2f4179a461bb5c63362a881c36870444019269bf6"} Jan 26 16:55:04 crc kubenswrapper[4823]: I0126 16:55:04.798307 4823 scope.go:117] "RemoveContainer" containerID="a11d4f003c9136501f9f0a2995975700b09b1b5023019f6c7a1d4018867115f6" Jan 26 16:55:05 crc kubenswrapper[4823]: I0126 16:55:05.813268 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4"} Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.801270 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bzgb5"] Jan 26 16:55:48 crc kubenswrapper[4823]: E0126 16:55:48.802871 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="extract-content" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.802894 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="extract-content" Jan 26 16:55:48 crc kubenswrapper[4823]: E0126 16:55:48.802923 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="registry-server" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.802930 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="registry-server" Jan 26 16:55:48 crc kubenswrapper[4823]: E0126 16:55:48.802954 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="extract-utilities" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.802963 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="extract-utilities" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.803206 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26cbd48-f73b-44b8-9758-c44a58311eb7" containerName="registry-server" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.825061 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.835473 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzgb5"] Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.885738 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-utilities\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.885960 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf45w\" (UniqueName: \"kubernetes.io/projected/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-kube-api-access-nf45w\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.886106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-catalog-content\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.989020 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-catalog-content\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.989118 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-utilities\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.989187 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf45w\" (UniqueName: \"kubernetes.io/projected/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-kube-api-access-nf45w\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.990064 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-catalog-content\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:48 crc kubenswrapper[4823]: I0126 16:55:48.990115 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-utilities\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:49 crc kubenswrapper[4823]: I0126 16:55:49.012518 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf45w\" (UniqueName: \"kubernetes.io/projected/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-kube-api-access-nf45w\") pod \"certified-operators-bzgb5\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:49 crc kubenswrapper[4823]: I0126 16:55:49.166518 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:49 crc kubenswrapper[4823]: I0126 16:55:49.719593 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bzgb5"] Jan 26 16:55:50 crc kubenswrapper[4823]: I0126 16:55:50.184320 4823 generic.go:334] "Generic (PLEG): container finished" podID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerID="23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d" exitCode=0 Jan 26 16:55:50 crc kubenswrapper[4823]: I0126 16:55:50.184421 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerDied","Data":"23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d"} Jan 26 16:55:50 crc kubenswrapper[4823]: I0126 16:55:50.184454 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerStarted","Data":"bcfbf504d0d5800b6dd2e5de6281e9bb76ad2c9a12f3cfdcfb2cacc4825b9c71"} Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.197897 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerStarted","Data":"bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367"} Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.792843 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t76wn"] Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.795406 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.806864 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t76wn"] Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.949966 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-catalog-content\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.950228 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48sh5\" (UniqueName: \"kubernetes.io/projected/7bbe2590-ba94-4af1-863a-b898905c8551-kube-api-access-48sh5\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:51 crc kubenswrapper[4823]: I0126 16:55:51.950472 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-utilities\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.052114 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48sh5\" (UniqueName: \"kubernetes.io/projected/7bbe2590-ba94-4af1-863a-b898905c8551-kube-api-access-48sh5\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.052571 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-utilities\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.052658 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-catalog-content\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.052997 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-utilities\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.053102 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-catalog-content\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.086737 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48sh5\" (UniqueName: \"kubernetes.io/projected/7bbe2590-ba94-4af1-863a-b898905c8551-kube-api-access-48sh5\") pod \"redhat-operators-t76wn\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.115442 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.224841 4823 generic.go:334] "Generic (PLEG): container finished" podID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerID="bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367" exitCode=0 Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.224887 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerDied","Data":"bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367"} Jan 26 16:55:52 crc kubenswrapper[4823]: I0126 16:55:52.570582 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t76wn"] Jan 26 16:55:53 crc kubenswrapper[4823]: I0126 16:55:53.234160 4823 generic.go:334] "Generic (PLEG): container finished" podID="7bbe2590-ba94-4af1-863a-b898905c8551" containerID="5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4" exitCode=0 Jan 26 16:55:53 crc kubenswrapper[4823]: I0126 16:55:53.234240 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerDied","Data":"5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4"} Jan 26 16:55:53 crc kubenswrapper[4823]: I0126 16:55:53.234539 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerStarted","Data":"afe3459bbe124d73db483eb12270d92b726ff6a075e0628339324d9a34ffeee4"} Jan 26 16:55:53 crc kubenswrapper[4823]: I0126 16:55:53.237227 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" 
event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerStarted","Data":"7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9"} Jan 26 16:55:53 crc kubenswrapper[4823]: I0126 16:55:53.285330 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bzgb5" podStartSLOduration=2.803642387 podStartE2EDuration="5.285311227s" podCreationTimestamp="2026-01-26 16:55:48 +0000 UTC" firstStartedPulling="2026-01-26 16:55:50.186066079 +0000 UTC m=+7746.871529184" lastFinishedPulling="2026-01-26 16:55:52.667734919 +0000 UTC m=+7749.353198024" observedRunningTime="2026-01-26 16:55:53.275682594 +0000 UTC m=+7749.961145719" watchObservedRunningTime="2026-01-26 16:55:53.285311227 +0000 UTC m=+7749.970774332" Jan 26 16:55:54 crc kubenswrapper[4823]: I0126 16:55:54.248699 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerStarted","Data":"0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9"} Jan 26 16:55:55 crc kubenswrapper[4823]: I0126 16:55:55.257836 4823 generic.go:334] "Generic (PLEG): container finished" podID="7bbe2590-ba94-4af1-863a-b898905c8551" containerID="0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9" exitCode=0 Jan 26 16:55:55 crc kubenswrapper[4823]: I0126 16:55:55.257892 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerDied","Data":"0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9"} Jan 26 16:55:56 crc kubenswrapper[4823]: I0126 16:55:56.268653 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" 
event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerStarted","Data":"1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480"} Jan 26 16:55:56 crc kubenswrapper[4823]: I0126 16:55:56.300582 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t76wn" podStartSLOduration=2.776948072 podStartE2EDuration="5.30054051s" podCreationTimestamp="2026-01-26 16:55:51 +0000 UTC" firstStartedPulling="2026-01-26 16:55:53.236345554 +0000 UTC m=+7749.921808659" lastFinishedPulling="2026-01-26 16:55:55.759937982 +0000 UTC m=+7752.445401097" observedRunningTime="2026-01-26 16:55:56.289517388 +0000 UTC m=+7752.974980503" watchObservedRunningTime="2026-01-26 16:55:56.30054051 +0000 UTC m=+7752.986003615" Jan 26 16:55:59 crc kubenswrapper[4823]: I0126 16:55:59.166875 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:59 crc kubenswrapper[4823]: I0126 16:55:59.167394 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:59 crc kubenswrapper[4823]: I0126 16:55:59.212732 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:55:59 crc kubenswrapper[4823]: I0126 16:55:59.360090 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:56:00 crc kubenswrapper[4823]: I0126 16:56:00.183695 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzgb5"] Jan 26 16:56:01 crc kubenswrapper[4823]: I0126 16:56:01.329207 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bzgb5" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="registry-server" 
containerID="cri-o://7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9" gracePeriod=2 Jan 26 16:56:02 crc kubenswrapper[4823]: I0126 16:56:02.116570 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:56:02 crc kubenswrapper[4823]: I0126 16:56:02.117119 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:56:02 crc kubenswrapper[4823]: I0126 16:56:02.208092 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:56:02 crc kubenswrapper[4823]: I0126 16:56:02.390822 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:56:03 crc kubenswrapper[4823]: I0126 16:56:03.595997 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t76wn"] Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.142391 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.321330 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-utilities\") pod \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.321666 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf45w\" (UniqueName: \"kubernetes.io/projected/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-kube-api-access-nf45w\") pod \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.321791 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-catalog-content\") pod \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\" (UID: \"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0\") " Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.322324 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-utilities" (OuterVolumeSpecName: "utilities") pod "5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" (UID: "5ee5b9b3-559a-4df2-9a1c-5b49796e52e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.328382 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-kube-api-access-nf45w" (OuterVolumeSpecName: "kube-api-access-nf45w") pod "5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" (UID: "5ee5b9b3-559a-4df2-9a1c-5b49796e52e0"). InnerVolumeSpecName "kube-api-access-nf45w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.364127 4823 generic.go:334] "Generic (PLEG): container finished" podID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerID="7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9" exitCode=0 Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.364226 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bzgb5" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.364264 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerDied","Data":"7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9"} Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.364318 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bzgb5" event={"ID":"5ee5b9b3-559a-4df2-9a1c-5b49796e52e0","Type":"ContainerDied","Data":"bcfbf504d0d5800b6dd2e5de6281e9bb76ad2c9a12f3cfdcfb2cacc4825b9c71"} Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.364348 4823 scope.go:117] "RemoveContainer" containerID="7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.375618 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" (UID: "5ee5b9b3-559a-4df2-9a1c-5b49796e52e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.391924 4823 scope.go:117] "RemoveContainer" containerID="bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.411711 4823 scope.go:117] "RemoveContainer" containerID="23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.425102 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.425127 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.425137 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf45w\" (UniqueName: \"kubernetes.io/projected/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0-kube-api-access-nf45w\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.461370 4823 scope.go:117] "RemoveContainer" containerID="7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9" Jan 26 16:56:04 crc kubenswrapper[4823]: E0126 16:56:04.461834 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9\": container with ID starting with 7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9 not found: ID does not exist" containerID="7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.461874 4823 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9"} err="failed to get container status \"7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9\": rpc error: code = NotFound desc = could not find container \"7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9\": container with ID starting with 7790c564331c51e423c7954a63ab17679fd68c7ae87c8015c97445fdbac358a9 not found: ID does not exist" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.462635 4823 scope.go:117] "RemoveContainer" containerID="bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367" Jan 26 16:56:04 crc kubenswrapper[4823]: E0126 16:56:04.462912 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367\": container with ID starting with bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367 not found: ID does not exist" containerID="bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.462940 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367"} err="failed to get container status \"bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367\": rpc error: code = NotFound desc = could not find container \"bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367\": container with ID starting with bd6a54b591f5cdc942b80b9bfb000d5f28cfcfaad8e4457c4bd0452604d77367 not found: ID does not exist" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.462954 4823 scope.go:117] "RemoveContainer" containerID="23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d" Jan 26 16:56:04 crc kubenswrapper[4823]: E0126 16:56:04.463202 4823 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d\": container with ID starting with 23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d not found: ID does not exist" containerID="23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.463224 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d"} err="failed to get container status \"23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d\": rpc error: code = NotFound desc = could not find container \"23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d\": container with ID starting with 23d7df1d621c040dbd38716f57198226d42e7c872f03f9326ab7ae0275f4fa6d not found: ID does not exist" Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.696063 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bzgb5"] Jan 26 16:56:04 crc kubenswrapper[4823]: I0126 16:56:04.704077 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bzgb5"] Jan 26 16:56:05 crc kubenswrapper[4823]: I0126 16:56:05.376595 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t76wn" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="registry-server" containerID="cri-o://1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480" gracePeriod=2 Jan 26 16:56:05 crc kubenswrapper[4823]: I0126 16:56:05.582045 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" path="/var/lib/kubelet/pods/5ee5b9b3-559a-4df2-9a1c-5b49796e52e0/volumes" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.354860 4823 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.373180 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-catalog-content\") pod \"7bbe2590-ba94-4af1-863a-b898905c8551\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.373528 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48sh5\" (UniqueName: \"kubernetes.io/projected/7bbe2590-ba94-4af1-863a-b898905c8551-kube-api-access-48sh5\") pod \"7bbe2590-ba94-4af1-863a-b898905c8551\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.373582 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-utilities\") pod \"7bbe2590-ba94-4af1-863a-b898905c8551\" (UID: \"7bbe2590-ba94-4af1-863a-b898905c8551\") " Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.375232 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-utilities" (OuterVolumeSpecName: "utilities") pod "7bbe2590-ba94-4af1-863a-b898905c8551" (UID: "7bbe2590-ba94-4af1-863a-b898905c8551"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.382554 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bbe2590-ba94-4af1-863a-b898905c8551-kube-api-access-48sh5" (OuterVolumeSpecName: "kube-api-access-48sh5") pod "7bbe2590-ba94-4af1-863a-b898905c8551" (UID: "7bbe2590-ba94-4af1-863a-b898905c8551"). 
InnerVolumeSpecName "kube-api-access-48sh5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.394915 4823 generic.go:334] "Generic (PLEG): container finished" podID="7bbe2590-ba94-4af1-863a-b898905c8551" containerID="1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480" exitCode=0 Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.394963 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerDied","Data":"1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480"} Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.394996 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t76wn" event={"ID":"7bbe2590-ba94-4af1-863a-b898905c8551","Type":"ContainerDied","Data":"afe3459bbe124d73db483eb12270d92b726ff6a075e0628339324d9a34ffeee4"} Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.395014 4823 scope.go:117] "RemoveContainer" containerID="1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.395148 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t76wn" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.435652 4823 scope.go:117] "RemoveContainer" containerID="0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.454186 4823 scope.go:117] "RemoveContainer" containerID="5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.475246 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48sh5\" (UniqueName: \"kubernetes.io/projected/7bbe2590-ba94-4af1-863a-b898905c8551-kube-api-access-48sh5\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.475515 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.487080 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bbe2590-ba94-4af1-863a-b898905c8551" (UID: "7bbe2590-ba94-4af1-863a-b898905c8551"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.503070 4823 scope.go:117] "RemoveContainer" containerID="1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480" Jan 26 16:56:06 crc kubenswrapper[4823]: E0126 16:56:06.503713 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480\": container with ID starting with 1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480 not found: ID does not exist" containerID="1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.503796 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480"} err="failed to get container status \"1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480\": rpc error: code = NotFound desc = could not find container \"1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480\": container with ID starting with 1a3a0ab8d7543943b4de2ad565c7c8e82f914c643e8a69fad636356770ac5480 not found: ID does not exist" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.503846 4823 scope.go:117] "RemoveContainer" containerID="0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9" Jan 26 16:56:06 crc kubenswrapper[4823]: E0126 16:56:06.504514 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9\": container with ID starting with 0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9 not found: ID does not exist" containerID="0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.504696 
4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9"} err="failed to get container status \"0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9\": rpc error: code = NotFound desc = could not find container \"0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9\": container with ID starting with 0469e181aa8ae2de2cd0895e8ff443e4b00a1bfee28d5d483201ffbef9561bb9 not found: ID does not exist" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.504835 4823 scope.go:117] "RemoveContainer" containerID="5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4" Jan 26 16:56:06 crc kubenswrapper[4823]: E0126 16:56:06.505357 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4\": container with ID starting with 5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4 not found: ID does not exist" containerID="5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.505449 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4"} err="failed to get container status \"5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4\": rpc error: code = NotFound desc = could not find container \"5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4\": container with ID starting with 5c35ce70c3acb0acb2a900fdea6649257201aed77e208e1d6110ae06e4510af4 not found: ID does not exist" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.578211 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7bbe2590-ba94-4af1-863a-b898905c8551-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.740318 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t76wn"] Jan 26 16:56:06 crc kubenswrapper[4823]: I0126 16:56:06.748133 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t76wn"] Jan 26 16:56:07 crc kubenswrapper[4823]: I0126 16:56:07.573612 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" path="/var/lib/kubelet/pods/7bbe2590-ba94-4af1-863a-b898905c8551/volumes" Jan 26 16:57:04 crc kubenswrapper[4823]: I0126 16:57:04.507892 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:57:04 crc kubenswrapper[4823]: I0126 16:57:04.508708 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:57:34 crc kubenswrapper[4823]: I0126 16:57:34.508017 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:57:34 crc kubenswrapper[4823]: I0126 16:57:34.508813 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:58:04 crc kubenswrapper[4823]: I0126 16:58:04.508510 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 16:58:04 crc kubenswrapper[4823]: I0126 16:58:04.509652 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 16:58:04 crc kubenswrapper[4823]: I0126 16:58:04.509716 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 16:58:04 crc kubenswrapper[4823]: I0126 16:58:04.510463 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 16:58:04 crc kubenswrapper[4823]: I0126 16:58:04.510511 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" gracePeriod=600 Jan 26 
16:58:04 crc kubenswrapper[4823]: E0126 16:58:04.665308 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:58:05 crc kubenswrapper[4823]: I0126 16:58:05.480299 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" exitCode=0 Jan 26 16:58:05 crc kubenswrapper[4823]: I0126 16:58:05.480405 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4"} Jan 26 16:58:05 crc kubenswrapper[4823]: I0126 16:58:05.480735 4823 scope.go:117] "RemoveContainer" containerID="2a0dfa55f0438eb89687dcb2f4179a461bb5c63362a881c36870444019269bf6" Jan 26 16:58:05 crc kubenswrapper[4823]: I0126 16:58:05.481465 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:58:05 crc kubenswrapper[4823]: E0126 16:58:05.481848 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:58:20 crc kubenswrapper[4823]: I0126 16:58:20.560769 4823 
scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:58:20 crc kubenswrapper[4823]: E0126 16:58:20.561950 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.751083 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5t7th"] Jan 26 16:58:25 crc kubenswrapper[4823]: E0126 16:58:25.751967 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="extract-content" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.751980 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="extract-content" Jan 26 16:58:25 crc kubenswrapper[4823]: E0126 16:58:25.751996 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="extract-utilities" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752002 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="extract-utilities" Jan 26 16:58:25 crc kubenswrapper[4823]: E0126 16:58:25.752024 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="extract-utilities" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752031 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="extract-utilities" Jan 26 16:58:25 crc 
kubenswrapper[4823]: E0126 16:58:25.752043 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="registry-server" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752050 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="registry-server" Jan 26 16:58:25 crc kubenswrapper[4823]: E0126 16:58:25.752058 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="extract-content" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752064 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="extract-content" Jan 26 16:58:25 crc kubenswrapper[4823]: E0126 16:58:25.752079 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="registry-server" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752085 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="registry-server" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752242 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ee5b9b3-559a-4df2-9a1c-5b49796e52e0" containerName="registry-server" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.752261 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bbe2590-ba94-4af1-863a-b898905c8551" containerName="registry-server" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.753625 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.765877 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t7th"] Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.776923 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrjnm\" (UniqueName: \"kubernetes.io/projected/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-kube-api-access-mrjnm\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.777022 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-utilities\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.777176 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-catalog-content\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.879597 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-catalog-content\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.879711 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mrjnm\" (UniqueName: \"kubernetes.io/projected/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-kube-api-access-mrjnm\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.879780 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-utilities\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.880200 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-utilities\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.880288 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-catalog-content\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:25 crc kubenswrapper[4823]: I0126 16:58:25.898886 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrjnm\" (UniqueName: \"kubernetes.io/projected/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-kube-api-access-mrjnm\") pod \"redhat-marketplace-5t7th\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:26 crc kubenswrapper[4823]: I0126 16:58:26.084458 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:26 crc kubenswrapper[4823]: I0126 16:58:26.565198 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t7th"] Jan 26 16:58:26 crc kubenswrapper[4823]: W0126 16:58:26.565313 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod985ad03d_ca25_4f90_b3b5_8a1f59e8b29f.slice/crio-afd9656f2a2910a0969e412263be283437f7381f2e1378af18cecdcfc3283e5d WatchSource:0}: Error finding container afd9656f2a2910a0969e412263be283437f7381f2e1378af18cecdcfc3283e5d: Status 404 returned error can't find the container with id afd9656f2a2910a0969e412263be283437f7381f2e1378af18cecdcfc3283e5d Jan 26 16:58:26 crc kubenswrapper[4823]: I0126 16:58:26.687981 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t7th" event={"ID":"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f","Type":"ContainerStarted","Data":"afd9656f2a2910a0969e412263be283437f7381f2e1378af18cecdcfc3283e5d"} Jan 26 16:58:27 crc kubenswrapper[4823]: I0126 16:58:27.699606 4823 generic.go:334] "Generic (PLEG): container finished" podID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerID="15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb" exitCode=0 Jan 26 16:58:27 crc kubenswrapper[4823]: I0126 16:58:27.699711 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t7th" event={"ID":"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f","Type":"ContainerDied","Data":"15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb"} Jan 26 16:58:28 crc kubenswrapper[4823]: I0126 16:58:28.714635 4823 generic.go:334] "Generic (PLEG): container finished" podID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerID="312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14" exitCode=0 Jan 26 16:58:28 crc kubenswrapper[4823]: I0126 
16:58:28.714758 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t7th" event={"ID":"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f","Type":"ContainerDied","Data":"312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14"} Jan 26 16:58:29 crc kubenswrapper[4823]: I0126 16:58:29.725651 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t7th" event={"ID":"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f","Type":"ContainerStarted","Data":"ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4"} Jan 26 16:58:29 crc kubenswrapper[4823]: I0126 16:58:29.746980 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5t7th" podStartSLOduration=3.272834285 podStartE2EDuration="4.746959009s" podCreationTimestamp="2026-01-26 16:58:25 +0000 UTC" firstStartedPulling="2026-01-26 16:58:27.701735091 +0000 UTC m=+7904.387198196" lastFinishedPulling="2026-01-26 16:58:29.175859815 +0000 UTC m=+7905.861322920" observedRunningTime="2026-01-26 16:58:29.740384269 +0000 UTC m=+7906.425847374" watchObservedRunningTime="2026-01-26 16:58:29.746959009 +0000 UTC m=+7906.432422114" Jan 26 16:58:35 crc kubenswrapper[4823]: I0126 16:58:35.560933 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:58:35 crc kubenswrapper[4823]: E0126 16:58:35.562050 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:58:36 crc kubenswrapper[4823]: I0126 16:58:36.085458 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:36 crc kubenswrapper[4823]: I0126 16:58:36.085539 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:36 crc kubenswrapper[4823]: I0126 16:58:36.159895 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:36 crc kubenswrapper[4823]: I0126 16:58:36.872086 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:36 crc kubenswrapper[4823]: I0126 16:58:36.934916 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t7th"] Jan 26 16:58:38 crc kubenswrapper[4823]: I0126 16:58:38.804787 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5t7th" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="registry-server" containerID="cri-o://ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4" gracePeriod=2 Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.373668 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.567616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrjnm\" (UniqueName: \"kubernetes.io/projected/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-kube-api-access-mrjnm\") pod \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.567864 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-catalog-content\") pod \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.567907 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-utilities\") pod \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\" (UID: \"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f\") " Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.569309 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-utilities" (OuterVolumeSpecName: "utilities") pod "985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" (UID: "985ad03d-ca25-4f90-b3b5-8a1f59e8b29f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.584693 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-kube-api-access-mrjnm" (OuterVolumeSpecName: "kube-api-access-mrjnm") pod "985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" (UID: "985ad03d-ca25-4f90-b3b5-8a1f59e8b29f"). InnerVolumeSpecName "kube-api-access-mrjnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.596395 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" (UID: "985ad03d-ca25-4f90-b3b5-8a1f59e8b29f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.670073 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrjnm\" (UniqueName: \"kubernetes.io/projected/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-kube-api-access-mrjnm\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.670333 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.670448 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.815124 4823 generic.go:334] "Generic (PLEG): container finished" podID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerID="ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4" exitCode=0 Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.815166 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5t7th" event={"ID":"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f","Type":"ContainerDied","Data":"ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4"} Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.815196 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-5t7th" event={"ID":"985ad03d-ca25-4f90-b3b5-8a1f59e8b29f","Type":"ContainerDied","Data":"afd9656f2a2910a0969e412263be283437f7381f2e1378af18cecdcfc3283e5d"} Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.815212 4823 scope.go:117] "RemoveContainer" containerID="ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.815254 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5t7th" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.847488 4823 scope.go:117] "RemoveContainer" containerID="312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.871423 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t7th"] Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.878600 4823 scope.go:117] "RemoveContainer" containerID="15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.882636 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5t7th"] Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.923644 4823 scope.go:117] "RemoveContainer" containerID="ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4" Jan 26 16:58:39 crc kubenswrapper[4823]: E0126 16:58:39.924168 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4\": container with ID starting with ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4 not found: ID does not exist" containerID="ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.924222 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4"} err="failed to get container status \"ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4\": rpc error: code = NotFound desc = could not find container \"ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4\": container with ID starting with ba86ff3708c81de02b5bc5b343c6ce5062b738b2c2c1e25c5f1a675bacc1c4c4 not found: ID does not exist" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.924257 4823 scope.go:117] "RemoveContainer" containerID="312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14" Jan 26 16:58:39 crc kubenswrapper[4823]: E0126 16:58:39.924703 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14\": container with ID starting with 312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14 not found: ID does not exist" containerID="312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.924754 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14"} err="failed to get container status \"312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14\": rpc error: code = NotFound desc = could not find container \"312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14\": container with ID starting with 312669e1c495143de5ac59516c3ead1b603dfc52a5dcf470b44a2fc0618bec14 not found: ID does not exist" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.924785 4823 scope.go:117] "RemoveContainer" containerID="15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb" Jan 26 16:58:39 crc kubenswrapper[4823]: E0126 
16:58:39.925086 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb\": container with ID starting with 15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb not found: ID does not exist" containerID="15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb" Jan 26 16:58:39 crc kubenswrapper[4823]: I0126 16:58:39.925110 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb"} err="failed to get container status \"15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb\": rpc error: code = NotFound desc = could not find container \"15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb\": container with ID starting with 15ed89542c5fb80e25efd7adc8fc7e9d1279d7729147e9ca13d6dcd1ee930eeb not found: ID does not exist" Jan 26 16:58:41 crc kubenswrapper[4823]: I0126 16:58:41.574448 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" path="/var/lib/kubelet/pods/985ad03d-ca25-4f90-b3b5-8a1f59e8b29f/volumes" Jan 26 16:58:50 crc kubenswrapper[4823]: I0126 16:58:50.560247 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:58:50 crc kubenswrapper[4823]: E0126 16:58:50.561089 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:59:01 crc kubenswrapper[4823]: I0126 16:59:01.561173 
4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:59:01 crc kubenswrapper[4823]: E0126 16:59:01.562128 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:59:13 crc kubenswrapper[4823]: I0126 16:59:13.570551 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:59:13 crc kubenswrapper[4823]: E0126 16:59:13.571558 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:59:27 crc kubenswrapper[4823]: I0126 16:59:27.559896 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:59:27 crc kubenswrapper[4823]: E0126 16:59:27.560653 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:59:42 crc kubenswrapper[4823]: I0126 
16:59:42.562509 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:59:42 crc kubenswrapper[4823]: E0126 16:59:42.563762 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 16:59:57 crc kubenswrapper[4823]: I0126 16:59:57.560733 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 16:59:57 crc kubenswrapper[4823]: E0126 16:59:57.562040 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.197964 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6"] Jan 26 17:00:00 crc kubenswrapper[4823]: E0126 17:00:00.199064 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.199082 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4823]: E0126 17:00:00.199119 4823 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="extract-content" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.199128 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="extract-content" Jan 26 17:00:00 crc kubenswrapper[4823]: E0126 17:00:00.199146 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="extract-utilities" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.199154 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="extract-utilities" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.199355 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="985ad03d-ca25-4f90-b3b5-8a1f59e8b29f" containerName="registry-server" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.200245 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.202864 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.203577 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.213868 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6"] Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.319528 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1013816e-b1b5-4182-9275-801ed193469f-secret-volume\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.319964 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1013816e-b1b5-4182-9275-801ed193469f-config-volume\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.320341 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9bs8\" (UniqueName: \"kubernetes.io/projected/1013816e-b1b5-4182-9275-801ed193469f-kube-api-access-f9bs8\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.422326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1013816e-b1b5-4182-9275-801ed193469f-secret-volume\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.422414 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1013816e-b1b5-4182-9275-801ed193469f-config-volume\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.422551 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9bs8\" (UniqueName: \"kubernetes.io/projected/1013816e-b1b5-4182-9275-801ed193469f-kube-api-access-f9bs8\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.423427 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1013816e-b1b5-4182-9275-801ed193469f-config-volume\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.429818 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/1013816e-b1b5-4182-9275-801ed193469f-secret-volume\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.442199 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9bs8\" (UniqueName: \"kubernetes.io/projected/1013816e-b1b5-4182-9275-801ed193469f-kube-api-access-f9bs8\") pod \"collect-profiles-29490780-pdms6\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.537930 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:00 crc kubenswrapper[4823]: I0126 17:00:00.996080 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6"] Jan 26 17:00:01 crc kubenswrapper[4823]: I0126 17:00:01.646195 4823 generic.go:334] "Generic (PLEG): container finished" podID="1013816e-b1b5-4182-9275-801ed193469f" containerID="1bdf9b2824aa2b31e57b3162875cdb6da43affee5fa1160944657a91ef9aa130" exitCode=0 Jan 26 17:00:01 crc kubenswrapper[4823]: I0126 17:00:01.646667 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" event={"ID":"1013816e-b1b5-4182-9275-801ed193469f","Type":"ContainerDied","Data":"1bdf9b2824aa2b31e57b3162875cdb6da43affee5fa1160944657a91ef9aa130"} Jan 26 17:00:01 crc kubenswrapper[4823]: I0126 17:00:01.646716 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" 
event={"ID":"1013816e-b1b5-4182-9275-801ed193469f","Type":"ContainerStarted","Data":"caa939f1374ac655796eac88de6de66094abd472d6b18f477ee18c4cd6e09948"} Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:02.999856 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.081530 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1013816e-b1b5-4182-9275-801ed193469f-secret-volume\") pod \"1013816e-b1b5-4182-9275-801ed193469f\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.081645 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1013816e-b1b5-4182-9275-801ed193469f-config-volume\") pod \"1013816e-b1b5-4182-9275-801ed193469f\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.081819 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9bs8\" (UniqueName: \"kubernetes.io/projected/1013816e-b1b5-4182-9275-801ed193469f-kube-api-access-f9bs8\") pod \"1013816e-b1b5-4182-9275-801ed193469f\" (UID: \"1013816e-b1b5-4182-9275-801ed193469f\") " Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.082430 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1013816e-b1b5-4182-9275-801ed193469f-config-volume" (OuterVolumeSpecName: "config-volume") pod "1013816e-b1b5-4182-9275-801ed193469f" (UID: "1013816e-b1b5-4182-9275-801ed193469f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.088679 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1013816e-b1b5-4182-9275-801ed193469f-kube-api-access-f9bs8" (OuterVolumeSpecName: "kube-api-access-f9bs8") pod "1013816e-b1b5-4182-9275-801ed193469f" (UID: "1013816e-b1b5-4182-9275-801ed193469f"). InnerVolumeSpecName "kube-api-access-f9bs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.088719 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1013816e-b1b5-4182-9275-801ed193469f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1013816e-b1b5-4182-9275-801ed193469f" (UID: "1013816e-b1b5-4182-9275-801ed193469f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.183590 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1013816e-b1b5-4182-9275-801ed193469f-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.183984 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1013816e-b1b5-4182-9275-801ed193469f-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.183999 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9bs8\" (UniqueName: \"kubernetes.io/projected/1013816e-b1b5-4182-9275-801ed193469f-kube-api-access-f9bs8\") on node \"crc\" DevicePath \"\"" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.665250 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" 
event={"ID":"1013816e-b1b5-4182-9275-801ed193469f","Type":"ContainerDied","Data":"caa939f1374ac655796eac88de6de66094abd472d6b18f477ee18c4cd6e09948"} Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.665290 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caa939f1374ac655796eac88de6de66094abd472d6b18f477ee18c4cd6e09948" Jan 26 17:00:03 crc kubenswrapper[4823]: I0126 17:00:03.665684 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6" Jan 26 17:00:04 crc kubenswrapper[4823]: I0126 17:00:04.087626 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt"] Jan 26 17:00:04 crc kubenswrapper[4823]: I0126 17:00:04.099929 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490735-l88wt"] Jan 26 17:00:05 crc kubenswrapper[4823]: I0126 17:00:05.572779 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b2f3ca7-003f-48a8-afc6-a13f665b3c97" path="/var/lib/kubelet/pods/7b2f3ca7-003f-48a8-afc6-a13f665b3c97/volumes" Jan 26 17:00:09 crc kubenswrapper[4823]: I0126 17:00:09.549035 4823 scope.go:117] "RemoveContainer" containerID="0d311cd0cdc9767e4da707ed030cafd4ac323dfa30880d5da26348cbab7f57fa" Jan 26 17:00:09 crc kubenswrapper[4823]: I0126 17:00:09.577298 4823 scope.go:117] "RemoveContainer" containerID="86c5def06e50035c332043a92a14a761c00a20443be9de213cb39bfad7ac0ff6" Jan 26 17:00:09 crc kubenswrapper[4823]: I0126 17:00:09.600918 4823 scope.go:117] "RemoveContainer" containerID="1c11e3e25bea8b5ac3392d5c614fc99d457eac2a7605c992a02600309d6ed7ad" Jan 26 17:00:09 crc kubenswrapper[4823]: I0126 17:00:09.668121 4823 scope.go:117] "RemoveContainer" containerID="95c98b5c8456a2c609d7face73c8f501132c0c61de29efd2f181d5bedfb9f175" Jan 26 17:00:11 crc kubenswrapper[4823]: I0126 
17:00:11.560993 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:00:11 crc kubenswrapper[4823]: E0126 17:00:11.561767 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:00:24 crc kubenswrapper[4823]: I0126 17:00:24.560702 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:00:24 crc kubenswrapper[4823]: E0126 17:00:24.561553 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:00:36 crc kubenswrapper[4823]: I0126 17:00:36.561318 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:00:36 crc kubenswrapper[4823]: E0126 17:00:36.563147 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:00:51 crc 
kubenswrapper[4823]: I0126 17:00:51.561562 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:00:51 crc kubenswrapper[4823]: E0126 17:00:51.562924 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.161587 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29490781-rft4l"] Jan 26 17:01:00 crc kubenswrapper[4823]: E0126 17:01:00.162546 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1013816e-b1b5-4182-9275-801ed193469f" containerName="collect-profiles" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.162561 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1013816e-b1b5-4182-9275-801ed193469f" containerName="collect-profiles" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.162750 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1013816e-b1b5-4182-9275-801ed193469f" containerName="collect-profiles" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.163402 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.191009 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490781-rft4l"] Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.276530 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-combined-ca-bundle\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.276896 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-config-data\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.277636 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-fernet-keys\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.278710 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d29x8\" (UniqueName: \"kubernetes.io/projected/9e12af4b-204e-466a-9690-c2d44c25f1cd-kube-api-access-d29x8\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.382105 4823 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-fernet-keys\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.382259 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d29x8\" (UniqueName: \"kubernetes.io/projected/9e12af4b-204e-466a-9690-c2d44c25f1cd-kube-api-access-d29x8\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.382503 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-combined-ca-bundle\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.382564 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-config-data\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.388586 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-fernet-keys\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.388777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-combined-ca-bundle\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.389441 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-config-data\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.425571 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d29x8\" (UniqueName: \"kubernetes.io/projected/9e12af4b-204e-466a-9690-c2d44c25f1cd-kube-api-access-d29x8\") pod \"keystone-cron-29490781-rft4l\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.505827 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:00 crc kubenswrapper[4823]: I0126 17:01:00.962348 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29490781-rft4l"] Jan 26 17:01:01 crc kubenswrapper[4823]: I0126 17:01:01.232193 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-rft4l" event={"ID":"9e12af4b-204e-466a-9690-c2d44c25f1cd","Type":"ContainerStarted","Data":"f0f46feba548bd93e50b3815b86e5b859812e7108da276beed91fae3f39b2def"} Jan 26 17:01:01 crc kubenswrapper[4823]: I0126 17:01:01.232616 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-rft4l" event={"ID":"9e12af4b-204e-466a-9690-c2d44c25f1cd","Type":"ContainerStarted","Data":"ba44fe4a391d059879202a1703e713dc8a5afde6aa14408a42c47242df1dffa5"} Jan 26 17:01:01 crc kubenswrapper[4823]: I0126 17:01:01.253759 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29490781-rft4l" podStartSLOduration=1.253736787 podStartE2EDuration="1.253736787s" podCreationTimestamp="2026-01-26 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:01:01.24584008 +0000 UTC m=+8057.931303195" watchObservedRunningTime="2026-01-26 17:01:01.253736787 +0000 UTC m=+8057.939199902" Jan 26 17:01:04 crc kubenswrapper[4823]: I0126 17:01:04.265550 4823 generic.go:334] "Generic (PLEG): container finished" podID="9e12af4b-204e-466a-9690-c2d44c25f1cd" containerID="f0f46feba548bd93e50b3815b86e5b859812e7108da276beed91fae3f39b2def" exitCode=0 Jan 26 17:01:04 crc kubenswrapper[4823]: I0126 17:01:04.265827 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-rft4l" 
event={"ID":"9e12af4b-204e-466a-9690-c2d44c25f1cd","Type":"ContainerDied","Data":"f0f46feba548bd93e50b3815b86e5b859812e7108da276beed91fae3f39b2def"} Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.561210 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:01:05 crc kubenswrapper[4823]: E0126 17:01:05.561823 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.639391 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.812903 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-combined-ca-bundle\") pod \"9e12af4b-204e-466a-9690-c2d44c25f1cd\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.813828 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-config-data\") pod \"9e12af4b-204e-466a-9690-c2d44c25f1cd\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.813900 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-fernet-keys\") pod 
\"9e12af4b-204e-466a-9690-c2d44c25f1cd\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.814066 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d29x8\" (UniqueName: \"kubernetes.io/projected/9e12af4b-204e-466a-9690-c2d44c25f1cd-kube-api-access-d29x8\") pod \"9e12af4b-204e-466a-9690-c2d44c25f1cd\" (UID: \"9e12af4b-204e-466a-9690-c2d44c25f1cd\") " Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.819976 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "9e12af4b-204e-466a-9690-c2d44c25f1cd" (UID: "9e12af4b-204e-466a-9690-c2d44c25f1cd"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.820889 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e12af4b-204e-466a-9690-c2d44c25f1cd-kube-api-access-d29x8" (OuterVolumeSpecName: "kube-api-access-d29x8") pod "9e12af4b-204e-466a-9690-c2d44c25f1cd" (UID: "9e12af4b-204e-466a-9690-c2d44c25f1cd"). InnerVolumeSpecName "kube-api-access-d29x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.859496 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9e12af4b-204e-466a-9690-c2d44c25f1cd" (UID: "9e12af4b-204e-466a-9690-c2d44c25f1cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.887110 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-config-data" (OuterVolumeSpecName: "config-data") pod "9e12af4b-204e-466a-9690-c2d44c25f1cd" (UID: "9e12af4b-204e-466a-9690-c2d44c25f1cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.916762 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d29x8\" (UniqueName: \"kubernetes.io/projected/9e12af4b-204e-466a-9690-c2d44c25f1cd-kube-api-access-d29x8\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.917076 4823 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.917090 4823 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:05 crc kubenswrapper[4823]: I0126 17:01:05.917104 4823 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/9e12af4b-204e-466a-9690-c2d44c25f1cd-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 26 17:01:06 crc kubenswrapper[4823]: I0126 17:01:06.285195 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29490781-rft4l" event={"ID":"9e12af4b-204e-466a-9690-c2d44c25f1cd","Type":"ContainerDied","Data":"ba44fe4a391d059879202a1703e713dc8a5afde6aa14408a42c47242df1dffa5"} Jan 26 17:01:06 crc kubenswrapper[4823]: I0126 17:01:06.285235 4823 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="ba44fe4a391d059879202a1703e713dc8a5afde6aa14408a42c47242df1dffa5" Jan 26 17:01:06 crc kubenswrapper[4823]: I0126 17:01:06.285258 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29490781-rft4l" Jan 26 17:01:20 crc kubenswrapper[4823]: I0126 17:01:20.561103 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:01:20 crc kubenswrapper[4823]: E0126 17:01:20.562026 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:01:32 crc kubenswrapper[4823]: I0126 17:01:32.561275 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:01:32 crc kubenswrapper[4823]: E0126 17:01:32.562214 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:01:44 crc kubenswrapper[4823]: I0126 17:01:44.560177 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:01:44 crc kubenswrapper[4823]: E0126 17:01:44.560869 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:01:56 crc kubenswrapper[4823]: I0126 17:01:56.561132 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:01:56 crc kubenswrapper[4823]: E0126 17:01:56.562124 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:02:09 crc kubenswrapper[4823]: I0126 17:02:09.560601 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:02:09 crc kubenswrapper[4823]: E0126 17:02:09.561237 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:02:23 crc kubenswrapper[4823]: I0126 17:02:23.580195 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:02:23 crc kubenswrapper[4823]: E0126 17:02:23.581492 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:02:37 crc kubenswrapper[4823]: I0126 17:02:37.560838 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:02:37 crc kubenswrapper[4823]: E0126 17:02:37.561775 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:02:52 crc kubenswrapper[4823]: I0126 17:02:52.561175 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:02:52 crc kubenswrapper[4823]: E0126 17:02:52.562208 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:03:04 crc kubenswrapper[4823]: I0126 17:03:04.560928 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:03:05 crc kubenswrapper[4823]: I0126 17:03:05.372908 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"460623c2f67aa3511e36c2bd5c852968a1a86a4029b520e99345aca2995320ec"} Jan 26 17:05:04 crc kubenswrapper[4823]: I0126 17:05:04.508895 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:05:04 crc kubenswrapper[4823]: I0126 17:05:04.509513 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:05:34 crc kubenswrapper[4823]: I0126 17:05:34.508348 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:05:34 crc kubenswrapper[4823]: I0126 17:05:34.508912 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.508582 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.509186 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.509237 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.510001 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"460623c2f67aa3511e36c2bd5c852968a1a86a4029b520e99345aca2995320ec"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.510081 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://460623c2f67aa3511e36c2bd5c852968a1a86a4029b520e99345aca2995320ec" gracePeriod=600 Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.997057 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="460623c2f67aa3511e36c2bd5c852968a1a86a4029b520e99345aca2995320ec" exitCode=0 Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.997134 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"460623c2f67aa3511e36c2bd5c852968a1a86a4029b520e99345aca2995320ec"} Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.997999 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342"} Jan 26 17:06:04 crc kubenswrapper[4823]: I0126 17:06:04.998052 4823 scope.go:117] "RemoveContainer" containerID="d2b1614a75cdded77756e30dcaf9c9063f6b486696ac2365fbbed2044af66da4" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.692125 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-24d8s"] Jan 26 17:06:24 crc kubenswrapper[4823]: E0126 17:06:24.693295 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e12af4b-204e-466a-9690-c2d44c25f1cd" containerName="keystone-cron" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.693317 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e12af4b-204e-466a-9690-c2d44c25f1cd" containerName="keystone-cron" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.693658 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e12af4b-204e-466a-9690-c2d44c25f1cd" containerName="keystone-cron" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.695930 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.707685 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24d8s"] Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.835760 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-catalog-content\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.835956 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94nv6\" (UniqueName: \"kubernetes.io/projected/257403af-4e67-401d-ac09-d218f6f434cc-kube-api-access-94nv6\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.836118 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-utilities\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.938583 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-catalog-content\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.938709 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-94nv6\" (UniqueName: \"kubernetes.io/projected/257403af-4e67-401d-ac09-d218f6f434cc-kube-api-access-94nv6\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.938758 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-utilities\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.939219 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-catalog-content\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.939323 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-utilities\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:24 crc kubenswrapper[4823]: I0126 17:06:24.962172 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94nv6\" (UniqueName: \"kubernetes.io/projected/257403af-4e67-401d-ac09-d218f6f434cc-kube-api-access-94nv6\") pod \"certified-operators-24d8s\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:25 crc kubenswrapper[4823]: I0126 17:06:25.036687 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:25 crc kubenswrapper[4823]: I0126 17:06:25.622756 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-24d8s"] Jan 26 17:06:25 crc kubenswrapper[4823]: W0126 17:06:25.631791 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod257403af_4e67_401d_ac09_d218f6f434cc.slice/crio-ca156a0663e60f016f6a5fd1361851bf0f7154ebe3713bcf30c810ba57114f1c WatchSource:0}: Error finding container ca156a0663e60f016f6a5fd1361851bf0f7154ebe3713bcf30c810ba57114f1c: Status 404 returned error can't find the container with id ca156a0663e60f016f6a5fd1361851bf0f7154ebe3713bcf30c810ba57114f1c Jan 26 17:06:26 crc kubenswrapper[4823]: I0126 17:06:26.264876 4823 generic.go:334] "Generic (PLEG): container finished" podID="257403af-4e67-401d-ac09-d218f6f434cc" containerID="d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba" exitCode=0 Jan 26 17:06:26 crc kubenswrapper[4823]: I0126 17:06:26.264961 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24d8s" event={"ID":"257403af-4e67-401d-ac09-d218f6f434cc","Type":"ContainerDied","Data":"d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba"} Jan 26 17:06:26 crc kubenswrapper[4823]: I0126 17:06:26.266190 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24d8s" event={"ID":"257403af-4e67-401d-ac09-d218f6f434cc","Type":"ContainerStarted","Data":"ca156a0663e60f016f6a5fd1361851bf0f7154ebe3713bcf30c810ba57114f1c"} Jan 26 17:06:26 crc kubenswrapper[4823]: I0126 17:06:26.267502 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:06:28 crc kubenswrapper[4823]: I0126 17:06:28.296908 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="257403af-4e67-401d-ac09-d218f6f434cc" containerID="2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e" exitCode=0 Jan 26 17:06:28 crc kubenswrapper[4823]: I0126 17:06:28.297035 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24d8s" event={"ID":"257403af-4e67-401d-ac09-d218f6f434cc","Type":"ContainerDied","Data":"2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e"} Jan 26 17:06:30 crc kubenswrapper[4823]: I0126 17:06:30.325422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24d8s" event={"ID":"257403af-4e67-401d-ac09-d218f6f434cc","Type":"ContainerStarted","Data":"41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893"} Jan 26 17:06:30 crc kubenswrapper[4823]: I0126 17:06:30.353843 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-24d8s" podStartSLOduration=3.453081815 podStartE2EDuration="6.353821889s" podCreationTimestamp="2026-01-26 17:06:24 +0000 UTC" firstStartedPulling="2026-01-26 17:06:26.26725963 +0000 UTC m=+8382.952722725" lastFinishedPulling="2026-01-26 17:06:29.167999684 +0000 UTC m=+8385.853462799" observedRunningTime="2026-01-26 17:06:30.344676309 +0000 UTC m=+8387.030139414" watchObservedRunningTime="2026-01-26 17:06:30.353821889 +0000 UTC m=+8387.039284994" Jan 26 17:06:35 crc kubenswrapper[4823]: I0126 17:06:35.037404 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:35 crc kubenswrapper[4823]: I0126 17:06:35.037947 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:35 crc kubenswrapper[4823]: I0126 17:06:35.081864 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:35 
crc kubenswrapper[4823]: I0126 17:06:35.428839 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:35 crc kubenswrapper[4823]: I0126 17:06:35.489951 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24d8s"] Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.403661 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-24d8s" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="registry-server" containerID="cri-o://41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893" gracePeriod=2 Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.741987 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6brsc"] Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.744507 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.752658 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6brsc"] Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.926712 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-utilities\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.927510 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-catalog-content\") pod \"redhat-operators-6brsc\" (UID: 
\"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.927688 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/fc5f1d19-54dd-4d15-83ee-c21178dfb606-kube-api-access-566nw\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:37 crc kubenswrapper[4823]: I0126 17:06:37.941212 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.030633 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-catalog-content\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.030788 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/fc5f1d19-54dd-4d15-83ee-c21178dfb606-kube-api-access-566nw\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.030997 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-utilities\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.031128 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-catalog-content\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.031534 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-utilities\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.057837 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/fc5f1d19-54dd-4d15-83ee-c21178dfb606-kube-api-access-566nw\") pod \"redhat-operators-6brsc\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.084791 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.144147 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-catalog-content\") pod \"257403af-4e67-401d-ac09-d218f6f434cc\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.144578 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-utilities\") pod \"257403af-4e67-401d-ac09-d218f6f434cc\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.144689 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94nv6\" (UniqueName: \"kubernetes.io/projected/257403af-4e67-401d-ac09-d218f6f434cc-kube-api-access-94nv6\") pod \"257403af-4e67-401d-ac09-d218f6f434cc\" (UID: \"257403af-4e67-401d-ac09-d218f6f434cc\") " Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.147467 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-utilities" (OuterVolumeSpecName: "utilities") pod "257403af-4e67-401d-ac09-d218f6f434cc" (UID: "257403af-4e67-401d-ac09-d218f6f434cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.151021 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/257403af-4e67-401d-ac09-d218f6f434cc-kube-api-access-94nv6" (OuterVolumeSpecName: "kube-api-access-94nv6") pod "257403af-4e67-401d-ac09-d218f6f434cc" (UID: "257403af-4e67-401d-ac09-d218f6f434cc"). InnerVolumeSpecName "kube-api-access-94nv6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.220546 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "257403af-4e67-401d-ac09-d218f6f434cc" (UID: "257403af-4e67-401d-ac09-d218f6f434cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.251792 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94nv6\" (UniqueName: \"kubernetes.io/projected/257403af-4e67-401d-ac09-d218f6f434cc-kube-api-access-94nv6\") on node \"crc\" DevicePath \"\"" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.251959 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.251976 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/257403af-4e67-401d-ac09-d218f6f434cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.415080 4823 generic.go:334] "Generic (PLEG): container finished" podID="257403af-4e67-401d-ac09-d218f6f434cc" containerID="41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893" exitCode=0 Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.415118 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-24d8s" event={"ID":"257403af-4e67-401d-ac09-d218f6f434cc","Type":"ContainerDied","Data":"41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893"} Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.415144 4823 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-24d8s" event={"ID":"257403af-4e67-401d-ac09-d218f6f434cc","Type":"ContainerDied","Data":"ca156a0663e60f016f6a5fd1361851bf0f7154ebe3713bcf30c810ba57114f1c"} Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.415162 4823 scope.go:117] "RemoveContainer" containerID="41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.415278 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-24d8s" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.439823 4823 scope.go:117] "RemoveContainer" containerID="2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.477742 4823 scope.go:117] "RemoveContainer" containerID="d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.481115 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-24d8s"] Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.494530 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-24d8s"] Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.497193 4823 scope.go:117] "RemoveContainer" containerID="41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893" Jan 26 17:06:38 crc kubenswrapper[4823]: E0126 17:06:38.501202 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893\": container with ID starting with 41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893 not found: ID does not exist" containerID="41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 
17:06:38.501320 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893"} err="failed to get container status \"41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893\": rpc error: code = NotFound desc = could not find container \"41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893\": container with ID starting with 41f061903836dea08c3a22ea62e600ca0a91a204080f61890a1d868b532aa893 not found: ID does not exist" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.501430 4823 scope.go:117] "RemoveContainer" containerID="2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e" Jan 26 17:06:38 crc kubenswrapper[4823]: E0126 17:06:38.502153 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e\": container with ID starting with 2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e not found: ID does not exist" containerID="2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.502189 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e"} err="failed to get container status \"2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e\": rpc error: code = NotFound desc = could not find container \"2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e\": container with ID starting with 2021e5390d27d49e97d9ddb1c61a0a94fbb848779010bef422e06236d7834b1e not found: ID does not exist" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.502210 4823 scope.go:117] "RemoveContainer" containerID="d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba" Jan 26 17:06:38 crc 
kubenswrapper[4823]: E0126 17:06:38.502565 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba\": container with ID starting with d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba not found: ID does not exist" containerID="d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.502605 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba"} err="failed to get container status \"d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba\": rpc error: code = NotFound desc = could not find container \"d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba\": container with ID starting with d1af486dc0f7a1bd5549503c847ea9fdec3438f55bde01b05e0556a92bf708ba not found: ID does not exist" Jan 26 17:06:38 crc kubenswrapper[4823]: I0126 17:06:38.591646 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6brsc"] Jan 26 17:06:39 crc kubenswrapper[4823]: I0126 17:06:39.426296 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerStarted","Data":"3a30014f2d66a61f2d66f59a266f8fe1789b2725307e9043edf315caac5c98fb"} Jan 26 17:06:39 crc kubenswrapper[4823]: I0126 17:06:39.586808 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="257403af-4e67-401d-ac09-d218f6f434cc" path="/var/lib/kubelet/pods/257403af-4e67-401d-ac09-d218f6f434cc/volumes" Jan 26 17:06:40 crc kubenswrapper[4823]: I0126 17:06:40.444646 4823 generic.go:334] "Generic (PLEG): container finished" podID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" 
containerID="7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2" exitCode=0 Jan 26 17:06:40 crc kubenswrapper[4823]: I0126 17:06:40.444720 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerDied","Data":"7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2"} Jan 26 17:06:41 crc kubenswrapper[4823]: I0126 17:06:41.455027 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerStarted","Data":"0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d"} Jan 26 17:06:42 crc kubenswrapper[4823]: I0126 17:06:42.463261 4823 generic.go:334] "Generic (PLEG): container finished" podID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerID="0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d" exitCode=0 Jan 26 17:06:42 crc kubenswrapper[4823]: I0126 17:06:42.463474 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerDied","Data":"0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d"} Jan 26 17:06:43 crc kubenswrapper[4823]: I0126 17:06:43.475330 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerStarted","Data":"cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2"} Jan 26 17:06:43 crc kubenswrapper[4823]: I0126 17:06:43.496583 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6brsc" podStartSLOduration=4.061130966 podStartE2EDuration="6.496558336s" podCreationTimestamp="2026-01-26 17:06:37 +0000 UTC" firstStartedPulling="2026-01-26 17:06:40.44734115 +0000 UTC 
m=+8397.132804255" lastFinishedPulling="2026-01-26 17:06:42.88276848 +0000 UTC m=+8399.568231625" observedRunningTime="2026-01-26 17:06:43.492038702 +0000 UTC m=+8400.177501807" watchObservedRunningTime="2026-01-26 17:06:43.496558336 +0000 UTC m=+8400.182021451" Jan 26 17:06:48 crc kubenswrapper[4823]: I0126 17:06:48.085606 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:48 crc kubenswrapper[4823]: I0126 17:06:48.086508 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:48 crc kubenswrapper[4823]: I0126 17:06:48.810378 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-5f6c667744-wvxxk" podUID="6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.48:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:06:49 crc kubenswrapper[4823]: I0126 17:06:49.126318 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6brsc" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="registry-server" probeResult="failure" output=< Jan 26 17:06:49 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 17:06:49 crc kubenswrapper[4823]: > Jan 26 17:06:58 crc kubenswrapper[4823]: I0126 17:06:58.135839 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:58 crc kubenswrapper[4823]: I0126 17:06:58.185922 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:06:58 crc kubenswrapper[4823]: I0126 17:06:58.382654 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-6brsc"] Jan 26 17:06:59 crc kubenswrapper[4823]: I0126 17:06:59.615268 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6brsc" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="registry-server" containerID="cri-o://cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2" gracePeriod=2 Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.034203 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.109849 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-catalog-content\") pod \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.109973 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/fc5f1d19-54dd-4d15-83ee-c21178dfb606-kube-api-access-566nw\") pod \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.110301 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-utilities\") pod \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\" (UID: \"fc5f1d19-54dd-4d15-83ee-c21178dfb606\") " Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.111819 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-utilities" (OuterVolumeSpecName: "utilities") pod "fc5f1d19-54dd-4d15-83ee-c21178dfb606" (UID: 
"fc5f1d19-54dd-4d15-83ee-c21178dfb606"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.118791 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc5f1d19-54dd-4d15-83ee-c21178dfb606-kube-api-access-566nw" (OuterVolumeSpecName: "kube-api-access-566nw") pod "fc5f1d19-54dd-4d15-83ee-c21178dfb606" (UID: "fc5f1d19-54dd-4d15-83ee-c21178dfb606"). InnerVolumeSpecName "kube-api-access-566nw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.212305 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-566nw\" (UniqueName: \"kubernetes.io/projected/fc5f1d19-54dd-4d15-83ee-c21178dfb606-kube-api-access-566nw\") on node \"crc\" DevicePath \"\"" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.212343 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.233434 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc5f1d19-54dd-4d15-83ee-c21178dfb606" (UID: "fc5f1d19-54dd-4d15-83ee-c21178dfb606"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.314620 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc5f1d19-54dd-4d15-83ee-c21178dfb606-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.632256 4823 generic.go:334] "Generic (PLEG): container finished" podID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerID="cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2" exitCode=0 Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.632311 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerDied","Data":"cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2"} Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.632396 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6brsc" event={"ID":"fc5f1d19-54dd-4d15-83ee-c21178dfb606","Type":"ContainerDied","Data":"3a30014f2d66a61f2d66f59a266f8fe1789b2725307e9043edf315caac5c98fb"} Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.632423 4823 scope.go:117] "RemoveContainer" containerID="cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.632417 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6brsc" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.653150 4823 scope.go:117] "RemoveContainer" containerID="0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.678175 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6brsc"] Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.685884 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6brsc"] Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.733454 4823 scope.go:117] "RemoveContainer" containerID="7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.752876 4823 scope.go:117] "RemoveContainer" containerID="cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2" Jan 26 17:07:00 crc kubenswrapper[4823]: E0126 17:07:00.753681 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2\": container with ID starting with cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2 not found: ID does not exist" containerID="cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.753732 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2"} err="failed to get container status \"cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2\": rpc error: code = NotFound desc = could not find container \"cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2\": container with ID starting with cbad96e78a2410297c9dd66fca1ae68685688a73bc50a5fe66d12d02d2b6c2c2 not found: ID does 
not exist" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.753766 4823 scope.go:117] "RemoveContainer" containerID="0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d" Jan 26 17:07:00 crc kubenswrapper[4823]: E0126 17:07:00.754199 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d\": container with ID starting with 0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d not found: ID does not exist" containerID="0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.754270 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d"} err="failed to get container status \"0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d\": rpc error: code = NotFound desc = could not find container \"0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d\": container with ID starting with 0a18be2af2a777971fa5c9095ce411e81b9616818ce057cfa6b85ca6433eed5d not found: ID does not exist" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.754316 4823 scope.go:117] "RemoveContainer" containerID="7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2" Jan 26 17:07:00 crc kubenswrapper[4823]: E0126 17:07:00.754672 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2\": container with ID starting with 7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2 not found: ID does not exist" containerID="7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2" Jan 26 17:07:00 crc kubenswrapper[4823]: I0126 17:07:00.754702 4823 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2"} err="failed to get container status \"7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2\": rpc error: code = NotFound desc = could not find container \"7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2\": container with ID starting with 7e2cb8a02836a532ce9a9e80e6e691c56e4c0e78f5c5c2cd5a0f2bb8a2ca7da2 not found: ID does not exist" Jan 26 17:07:01 crc kubenswrapper[4823]: I0126 17:07:01.574132 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" path="/var/lib/kubelet/pods/fc5f1d19-54dd-4d15-83ee-c21178dfb606/volumes" Jan 26 17:08:04 crc kubenswrapper[4823]: I0126 17:08:04.508082 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:08:04 crc kubenswrapper[4823]: I0126 17:08:04.508670 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:08:34 crc kubenswrapper[4823]: I0126 17:08:34.508816 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:08:34 crc kubenswrapper[4823]: I0126 17:08:34.509864 4823 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.949898 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v8676"] Jan 26 17:08:50 crc kubenswrapper[4823]: E0126 17:08:50.950723 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="registry-server" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.950738 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="registry-server" Jan 26 17:08:50 crc kubenswrapper[4823]: E0126 17:08:50.950753 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="extract-content" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.950759 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="extract-content" Jan 26 17:08:50 crc kubenswrapper[4823]: E0126 17:08:50.950775 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="extract-utilities" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.950783 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="extract-utilities" Jan 26 17:08:50 crc kubenswrapper[4823]: E0126 17:08:50.950794 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="extract-utilities" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.950800 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="extract-utilities" Jan 26 17:08:50 crc kubenswrapper[4823]: E0126 17:08:50.950826 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="extract-content" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.950831 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="extract-content" Jan 26 17:08:50 crc kubenswrapper[4823]: E0126 17:08:50.950843 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="registry-server" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.950849 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="registry-server" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.951034 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="257403af-4e67-401d-ac09-d218f6f434cc" containerName="registry-server" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.951044 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc5f1d19-54dd-4d15-83ee-c21178dfb606" containerName="registry-server" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.952391 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:50 crc kubenswrapper[4823]: I0126 17:08:50.961997 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8676"] Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.123582 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205f998-5201-4e3b-bb0a-eb744eb50637-catalog-content\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.123675 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9jm4\" (UniqueName: \"kubernetes.io/projected/5205f998-5201-4e3b-bb0a-eb744eb50637-kube-api-access-n9jm4\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.123742 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205f998-5201-4e3b-bb0a-eb744eb50637-utilities\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.225269 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205f998-5201-4e3b-bb0a-eb744eb50637-catalog-content\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.225698 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9jm4\" (UniqueName: \"kubernetes.io/projected/5205f998-5201-4e3b-bb0a-eb744eb50637-kube-api-access-n9jm4\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.225848 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5205f998-5201-4e3b-bb0a-eb744eb50637-catalog-content\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.225951 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205f998-5201-4e3b-bb0a-eb744eb50637-utilities\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.226210 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5205f998-5201-4e3b-bb0a-eb744eb50637-utilities\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.259944 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9jm4\" (UniqueName: \"kubernetes.io/projected/5205f998-5201-4e3b-bb0a-eb744eb50637-kube-api-access-n9jm4\") pod \"redhat-marketplace-v8676\" (UID: \"5205f998-5201-4e3b-bb0a-eb744eb50637\") " pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.285156 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.825886 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8676"] Jan 26 17:08:51 crc kubenswrapper[4823]: I0126 17:08:51.978331 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8676" event={"ID":"5205f998-5201-4e3b-bb0a-eb744eb50637","Type":"ContainerStarted","Data":"a672fcbeded84f82d37c8557f4074712ed91b07a6ec20efbf3e04b01c5f390a0"} Jan 26 17:08:52 crc kubenswrapper[4823]: I0126 17:08:52.987214 4823 generic.go:334] "Generic (PLEG): container finished" podID="5205f998-5201-4e3b-bb0a-eb744eb50637" containerID="245158c377bf9824342e06f47f2234c2782667b580485f88413d97ac24a20b43" exitCode=0 Jan 26 17:08:52 crc kubenswrapper[4823]: I0126 17:08:52.987272 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8676" event={"ID":"5205f998-5201-4e3b-bb0a-eb744eb50637","Type":"ContainerDied","Data":"245158c377bf9824342e06f47f2234c2782667b580485f88413d97ac24a20b43"} Jan 26 17:09:01 crc kubenswrapper[4823]: I0126 17:09:01.059141 4823 generic.go:334] "Generic (PLEG): container finished" podID="5205f998-5201-4e3b-bb0a-eb744eb50637" containerID="4002dce8dc07c98162171c374186b4c2d93bda14ed6fb69ef78ff78112031351" exitCode=0 Jan 26 17:09:01 crc kubenswrapper[4823]: I0126 17:09:01.059181 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v8676" event={"ID":"5205f998-5201-4e3b-bb0a-eb744eb50637","Type":"ContainerDied","Data":"4002dce8dc07c98162171c374186b4c2d93bda14ed6fb69ef78ff78112031351"} Jan 26 17:09:04 crc kubenswrapper[4823]: I0126 17:09:04.466691 4823 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-mrlhr" podUID="16294fad-09f5-4781-83d7-82b25d1bc644" containerName="manager" 
probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 26 17:09:04 crc kubenswrapper[4823]: I0126 17:09:04.508269 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:09:04 crc kubenswrapper[4823]: I0126 17:09:04.508349 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:09:04 crc kubenswrapper[4823]: I0126 17:09:04.508474 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:09:04 crc kubenswrapper[4823]: I0126 17:09:04.509619 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:09:04 crc kubenswrapper[4823]: I0126 17:09:04.509716 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" gracePeriod=600 Jan 26 17:09:06 crc kubenswrapper[4823]: I0126 17:09:06.108702 4823 
generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" exitCode=0 Jan 26 17:09:06 crc kubenswrapper[4823]: I0126 17:09:06.108799 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342"} Jan 26 17:09:06 crc kubenswrapper[4823]: I0126 17:09:06.109924 4823 scope.go:117] "RemoveContainer" containerID="460623c2f67aa3511e36c2bd5c852968a1a86a4029b520e99345aca2995320ec" Jan 26 17:09:08 crc kubenswrapper[4823]: E0126 17:09:08.459452 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:09:09 crc kubenswrapper[4823]: I0126 17:09:09.136810 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:09:09 crc kubenswrapper[4823]: E0126 17:09:09.137153 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:09:10 crc kubenswrapper[4823]: I0126 17:09:10.148771 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-v8676" event={"ID":"5205f998-5201-4e3b-bb0a-eb744eb50637","Type":"ContainerStarted","Data":"b98b886ed7d408b49c6e57db6d8fa73ddeae7cbf2d9a36c1e6fb18040152769c"} Jan 26 17:09:10 crc kubenswrapper[4823]: I0126 17:09:10.172527 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v8676" podStartSLOduration=3.535064276 podStartE2EDuration="20.172505919s" podCreationTimestamp="2026-01-26 17:08:50 +0000 UTC" firstStartedPulling="2026-01-26 17:08:52.989606025 +0000 UTC m=+8529.675069130" lastFinishedPulling="2026-01-26 17:09:09.627047668 +0000 UTC m=+8546.312510773" observedRunningTime="2026-01-26 17:09:10.16414207 +0000 UTC m=+8546.849605165" watchObservedRunningTime="2026-01-26 17:09:10.172505919 +0000 UTC m=+8546.857969024" Jan 26 17:09:11 crc kubenswrapper[4823]: I0126 17:09:11.287450 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:09:11 crc kubenswrapper[4823]: I0126 17:09:11.287519 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:09:12 crc kubenswrapper[4823]: I0126 17:09:12.393565 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-v8676" podUID="5205f998-5201-4e3b-bb0a-eb744eb50637" containerName="registry-server" probeResult="failure" output=< Jan 26 17:09:12 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 17:09:12 crc kubenswrapper[4823]: > Jan 26 17:09:21 crc kubenswrapper[4823]: I0126 17:09:21.333463 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:09:21 crc kubenswrapper[4823]: I0126 17:09:21.386101 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-v8676" Jan 26 17:09:21 crc kubenswrapper[4823]: I0126 17:09:21.980973 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v8676"] Jan 26 17:09:22 crc kubenswrapper[4823]: I0126 17:09:22.167093 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhwd7"] Jan 26 17:09:22 crc kubenswrapper[4823]: I0126 17:09:22.175219 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bhwd7" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="registry-server" containerID="cri-o://458d36e5b546e28adc50e646db10b9509e5d476dc59caf0b61bf92601ebf5944" gracePeriod=2 Jan 26 17:09:22 crc kubenswrapper[4823]: I0126 17:09:22.561189 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:09:22 crc kubenswrapper[4823]: E0126 17:09:22.561929 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.300089 4823 generic.go:334] "Generic (PLEG): container finished" podID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerID="458d36e5b546e28adc50e646db10b9509e5d476dc59caf0b61bf92601ebf5944" exitCode=0 Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.300167 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhwd7" 
event={"ID":"39fa4549-6e37-47f2-b9c6-bc874636ff40","Type":"ContainerDied","Data":"458d36e5b546e28adc50e646db10b9509e5d476dc59caf0b61bf92601ebf5944"} Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.380343 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.526310 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-catalog-content\") pod \"39fa4549-6e37-47f2-b9c6-bc874636ff40\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.526507 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-utilities\") pod \"39fa4549-6e37-47f2-b9c6-bc874636ff40\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.526616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dddz8\" (UniqueName: \"kubernetes.io/projected/39fa4549-6e37-47f2-b9c6-bc874636ff40-kube-api-access-dddz8\") pod \"39fa4549-6e37-47f2-b9c6-bc874636ff40\" (UID: \"39fa4549-6e37-47f2-b9c6-bc874636ff40\") " Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.529535 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-utilities" (OuterVolumeSpecName: "utilities") pod "39fa4549-6e37-47f2-b9c6-bc874636ff40" (UID: "39fa4549-6e37-47f2-b9c6-bc874636ff40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.534281 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39fa4549-6e37-47f2-b9c6-bc874636ff40-kube-api-access-dddz8" (OuterVolumeSpecName: "kube-api-access-dddz8") pod "39fa4549-6e37-47f2-b9c6-bc874636ff40" (UID: "39fa4549-6e37-47f2-b9c6-bc874636ff40"). InnerVolumeSpecName "kube-api-access-dddz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.593204 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39fa4549-6e37-47f2-b9c6-bc874636ff40" (UID: "39fa4549-6e37-47f2-b9c6-bc874636ff40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.629108 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.629153 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39fa4549-6e37-47f2-b9c6-bc874636ff40-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:23 crc kubenswrapper[4823]: I0126 17:09:23.629163 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dddz8\" (UniqueName: \"kubernetes.io/projected/39fa4549-6e37-47f2-b9c6-bc874636ff40-kube-api-access-dddz8\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.312163 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bhwd7" 
event={"ID":"39fa4549-6e37-47f2-b9c6-bc874636ff40","Type":"ContainerDied","Data":"f525be1a4a8eda4915ec814b4e5dbe90bee5da3de47ae22a21dd6f43cdaa3883"} Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.312197 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bhwd7" Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.312590 4823 scope.go:117] "RemoveContainer" containerID="458d36e5b546e28adc50e646db10b9509e5d476dc59caf0b61bf92601ebf5944" Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.346101 4823 scope.go:117] "RemoveContainer" containerID="ab84fc287342497186b89bb89ce4684cc8bf029f7f63285114623f1ea16dd579" Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.346949 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhwd7"] Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.355736 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bhwd7"] Jan 26 17:09:24 crc kubenswrapper[4823]: I0126 17:09:24.391871 4823 scope.go:117] "RemoveContainer" containerID="a742e9569338741bdd0336ecbb28adc26bd5adc3555ee2753a58ed392f130721" Jan 26 17:09:25 crc kubenswrapper[4823]: I0126 17:09:25.598385 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" path="/var/lib/kubelet/pods/39fa4549-6e37-47f2-b9c6-bc874636ff40/volumes" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.163057 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2mjq"] Jan 26 17:09:29 crc kubenswrapper[4823]: E0126 17:09:29.163824 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="registry-server" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.163842 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="registry-server" Jan 26 17:09:29 crc kubenswrapper[4823]: E0126 17:09:29.163854 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="extract-content" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.163860 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="extract-content" Jan 26 17:09:29 crc kubenswrapper[4823]: E0126 17:09:29.163867 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="extract-utilities" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.163874 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="extract-utilities" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.164087 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="39fa4549-6e37-47f2-b9c6-bc874636ff40" containerName="registry-server" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.165600 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.189073 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2mjq"] Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.343148 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn9wq\" (UniqueName: \"kubernetes.io/projected/3f629ad4-625a-4167-8d77-8cfe058b11ad-kube-api-access-pn9wq\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.343214 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-catalog-content\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.343240 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-utilities\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.445571 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-utilities\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.446064 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pn9wq\" (UniqueName: \"kubernetes.io/projected/3f629ad4-625a-4167-8d77-8cfe058b11ad-kube-api-access-pn9wq\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.446097 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-catalog-content\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.446162 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-utilities\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.446486 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-catalog-content\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.468829 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn9wq\" (UniqueName: \"kubernetes.io/projected/3f629ad4-625a-4167-8d77-8cfe058b11ad-kube-api-access-pn9wq\") pod \"community-operators-c2mjq\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:29 crc kubenswrapper[4823]: I0126 17:09:29.505109 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:30 crc kubenswrapper[4823]: I0126 17:09:30.079953 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2mjq"] Jan 26 17:09:30 crc kubenswrapper[4823]: W0126 17:09:30.101744 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f629ad4_625a_4167_8d77_8cfe058b11ad.slice/crio-99f89872e490c33b9b0a3f103eaececea891247ba4e262e9290d96eaf351ccb8 WatchSource:0}: Error finding container 99f89872e490c33b9b0a3f103eaececea891247ba4e262e9290d96eaf351ccb8: Status 404 returned error can't find the container with id 99f89872e490c33b9b0a3f103eaececea891247ba4e262e9290d96eaf351ccb8 Jan 26 17:09:30 crc kubenswrapper[4823]: I0126 17:09:30.375769 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2mjq" event={"ID":"3f629ad4-625a-4167-8d77-8cfe058b11ad","Type":"ContainerStarted","Data":"99f89872e490c33b9b0a3f103eaececea891247ba4e262e9290d96eaf351ccb8"} Jan 26 17:09:31 crc kubenswrapper[4823]: I0126 17:09:31.386008 4823 generic.go:334] "Generic (PLEG): container finished" podID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerID="fac5788dcf8de9bb200ca3e085c7ab96f37dcb9ba27f7edb8fc3b7d9400efde5" exitCode=0 Jan 26 17:09:31 crc kubenswrapper[4823]: I0126 17:09:31.386056 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2mjq" event={"ID":"3f629ad4-625a-4167-8d77-8cfe058b11ad","Type":"ContainerDied","Data":"fac5788dcf8de9bb200ca3e085c7ab96f37dcb9ba27f7edb8fc3b7d9400efde5"} Jan 26 17:09:33 crc kubenswrapper[4823]: I0126 17:09:33.410638 4823 generic.go:334] "Generic (PLEG): container finished" podID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerID="6527125b8d6b427eda3ca1ca140d256f046465ba6c44a3a44dcf7bd854eeddb9" exitCode=0 Jan 26 17:09:33 crc kubenswrapper[4823]: I0126 
17:09:33.410766 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2mjq" event={"ID":"3f629ad4-625a-4167-8d77-8cfe058b11ad","Type":"ContainerDied","Data":"6527125b8d6b427eda3ca1ca140d256f046465ba6c44a3a44dcf7bd854eeddb9"} Jan 26 17:09:36 crc kubenswrapper[4823]: I0126 17:09:36.435166 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2mjq" event={"ID":"3f629ad4-625a-4167-8d77-8cfe058b11ad","Type":"ContainerStarted","Data":"5ad751221908ca9db321a52bac43cf9f4c33e48f8fa0f5efae7ee5bb710383d5"} Jan 26 17:09:36 crc kubenswrapper[4823]: I0126 17:09:36.458139 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2mjq" podStartSLOduration=5.018768161 podStartE2EDuration="7.458118449s" podCreationTimestamp="2026-01-26 17:09:29 +0000 UTC" firstStartedPulling="2026-01-26 17:09:31.388101575 +0000 UTC m=+8568.073564670" lastFinishedPulling="2026-01-26 17:09:33.827451843 +0000 UTC m=+8570.512914958" observedRunningTime="2026-01-26 17:09:36.451688753 +0000 UTC m=+8573.137151858" watchObservedRunningTime="2026-01-26 17:09:36.458118449 +0000 UTC m=+8573.143581554" Jan 26 17:09:36 crc kubenswrapper[4823]: I0126 17:09:36.560385 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:09:36 crc kubenswrapper[4823]: E0126 17:09:36.560716 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:09:39 crc kubenswrapper[4823]: I0126 17:09:39.506036 4823 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:39 crc kubenswrapper[4823]: I0126 17:09:39.506397 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:39 crc kubenswrapper[4823]: I0126 17:09:39.580955 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:40 crc kubenswrapper[4823]: I0126 17:09:40.526068 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:41 crc kubenswrapper[4823]: I0126 17:09:41.342854 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2mjq"] Jan 26 17:09:42 crc kubenswrapper[4823]: I0126 17:09:42.496195 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2mjq" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="registry-server" containerID="cri-o://5ad751221908ca9db321a52bac43cf9f4c33e48f8fa0f5efae7ee5bb710383d5" gracePeriod=2 Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.536336 4823 generic.go:334] "Generic (PLEG): container finished" podID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerID="5ad751221908ca9db321a52bac43cf9f4c33e48f8fa0f5efae7ee5bb710383d5" exitCode=0 Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.536503 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2mjq" event={"ID":"3f629ad4-625a-4167-8d77-8cfe058b11ad","Type":"ContainerDied","Data":"5ad751221908ca9db321a52bac43cf9f4c33e48f8fa0f5efae7ee5bb710383d5"} Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.695259 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.886505 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-catalog-content\") pod \"3f629ad4-625a-4167-8d77-8cfe058b11ad\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.886620 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn9wq\" (UniqueName: \"kubernetes.io/projected/3f629ad4-625a-4167-8d77-8cfe058b11ad-kube-api-access-pn9wq\") pod \"3f629ad4-625a-4167-8d77-8cfe058b11ad\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.886763 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-utilities\") pod \"3f629ad4-625a-4167-8d77-8cfe058b11ad\" (UID: \"3f629ad4-625a-4167-8d77-8cfe058b11ad\") " Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.887860 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-utilities" (OuterVolumeSpecName: "utilities") pod "3f629ad4-625a-4167-8d77-8cfe058b11ad" (UID: "3f629ad4-625a-4167-8d77-8cfe058b11ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.893785 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f629ad4-625a-4167-8d77-8cfe058b11ad-kube-api-access-pn9wq" (OuterVolumeSpecName: "kube-api-access-pn9wq") pod "3f629ad4-625a-4167-8d77-8cfe058b11ad" (UID: "3f629ad4-625a-4167-8d77-8cfe058b11ad"). InnerVolumeSpecName "kube-api-access-pn9wq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.940251 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f629ad4-625a-4167-8d77-8cfe058b11ad" (UID: "3f629ad4-625a-4167-8d77-8cfe058b11ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.989249 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.989288 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn9wq\" (UniqueName: \"kubernetes.io/projected/3f629ad4-625a-4167-8d77-8cfe058b11ad-kube-api-access-pn9wq\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:45 crc kubenswrapper[4823]: I0126 17:09:45.989301 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f629ad4-625a-4167-8d77-8cfe058b11ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.548750 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2mjq" event={"ID":"3f629ad4-625a-4167-8d77-8cfe058b11ad","Type":"ContainerDied","Data":"99f89872e490c33b9b0a3f103eaececea891247ba4e262e9290d96eaf351ccb8"} Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.548852 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2mjq" Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.549111 4823 scope.go:117] "RemoveContainer" containerID="5ad751221908ca9db321a52bac43cf9f4c33e48f8fa0f5efae7ee5bb710383d5" Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.571233 4823 scope.go:117] "RemoveContainer" containerID="6527125b8d6b427eda3ca1ca140d256f046465ba6c44a3a44dcf7bd854eeddb9" Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.597383 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2mjq"] Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.607073 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2mjq"] Jan 26 17:09:46 crc kubenswrapper[4823]: I0126 17:09:46.625532 4823 scope.go:117] "RemoveContainer" containerID="fac5788dcf8de9bb200ca3e085c7ab96f37dcb9ba27f7edb8fc3b7d9400efde5" Jan 26 17:09:47 crc kubenswrapper[4823]: I0126 17:09:47.571011 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" path="/var/lib/kubelet/pods/3f629ad4-625a-4167-8d77-8cfe058b11ad/volumes" Jan 26 17:09:49 crc kubenswrapper[4823]: I0126 17:09:49.560394 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:09:49 crc kubenswrapper[4823]: E0126 17:09:49.560976 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:10:03 crc kubenswrapper[4823]: I0126 17:10:03.567869 4823 scope.go:117] "RemoveContainer" 
containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:10:03 crc kubenswrapper[4823]: E0126 17:10:03.568858 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:10:17 crc kubenswrapper[4823]: I0126 17:10:17.560926 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:10:17 crc kubenswrapper[4823]: E0126 17:10:17.561914 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:10:28 crc kubenswrapper[4823]: I0126 17:10:28.560110 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:10:28 crc kubenswrapper[4823]: E0126 17:10:28.561223 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:10:43 crc kubenswrapper[4823]: I0126 17:10:43.570938 4823 scope.go:117] 
"RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:10:43 crc kubenswrapper[4823]: E0126 17:10:43.571579 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:10:55 crc kubenswrapper[4823]: I0126 17:10:55.560764 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:10:55 crc kubenswrapper[4823]: E0126 17:10:55.561496 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:11:06 crc kubenswrapper[4823]: I0126 17:11:06.561017 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:11:06 crc kubenswrapper[4823]: E0126 17:11:06.563747 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:11:18 crc kubenswrapper[4823]: I0126 17:11:18.562594 
4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:11:18 crc kubenswrapper[4823]: E0126 17:11:18.563641 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:11:29 crc kubenswrapper[4823]: I0126 17:11:29.561409 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:11:29 crc kubenswrapper[4823]: E0126 17:11:29.562292 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:11:44 crc kubenswrapper[4823]: I0126 17:11:44.560193 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:11:44 crc kubenswrapper[4823]: E0126 17:11:44.561073 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:11:55 crc kubenswrapper[4823]: I0126 
17:11:55.561038 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:11:55 crc kubenswrapper[4823]: E0126 17:11:55.561897 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:12:06 crc kubenswrapper[4823]: I0126 17:12:06.560394 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:12:06 crc kubenswrapper[4823]: E0126 17:12:06.561152 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:12:19 crc kubenswrapper[4823]: I0126 17:12:19.563427 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:12:19 crc kubenswrapper[4823]: E0126 17:12:19.564196 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:12:31 crc 
kubenswrapper[4823]: I0126 17:12:31.561398 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:12:31 crc kubenswrapper[4823]: E0126 17:12:31.562571 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:12:44 crc kubenswrapper[4823]: I0126 17:12:44.562253 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:12:44 crc kubenswrapper[4823]: E0126 17:12:44.563444 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:12:58 crc kubenswrapper[4823]: I0126 17:12:58.560447 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:12:58 crc kubenswrapper[4823]: E0126 17:12:58.561422 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 
26 17:13:12 crc kubenswrapper[4823]: I0126 17:13:12.560022 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:13:12 crc kubenswrapper[4823]: E0126 17:13:12.560822 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:13:26 crc kubenswrapper[4823]: I0126 17:13:26.561086 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:13:26 crc kubenswrapper[4823]: E0126 17:13:26.561927 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:13:40 crc kubenswrapper[4823]: I0126 17:13:40.561190 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:13:40 crc kubenswrapper[4823]: E0126 17:13:40.561888 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:13:55 crc kubenswrapper[4823]: I0126 17:13:55.561220 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:13:55 crc kubenswrapper[4823]: E0126 17:13:55.562087 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:14:07 crc kubenswrapper[4823]: I0126 17:14:07.560085 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:14:08 crc kubenswrapper[4823]: I0126 17:14:08.753559 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"6b8e1cf30bb7fbe754bcbf2357ab788b171c49fd55aab1b6611084bd72258b62"} Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.149272 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws"] Jan 26 17:15:00 crc kubenswrapper[4823]: E0126 17:15:00.152580 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="extract-utilities" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.152727 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="extract-utilities" Jan 26 17:15:00 crc kubenswrapper[4823]: E0126 17:15:00.152824 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" 
containerName="extract-content" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.152900 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="extract-content" Jan 26 17:15:00 crc kubenswrapper[4823]: E0126 17:15:00.152987 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="registry-server" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.153053 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="registry-server" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.153379 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f629ad4-625a-4167-8d77-8cfe058b11ad" containerName="registry-server" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.154505 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.156596 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.157419 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.160082 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws"] Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.280974 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6bec74c-3a30-4a20-979c-94bb79dce9df-secret-volume\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.281088 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6bec74c-3a30-4a20-979c-94bb79dce9df-config-volume\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.281357 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvnwh\" (UniqueName: \"kubernetes.io/projected/b6bec74c-3a30-4a20-979c-94bb79dce9df-kube-api-access-xvnwh\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.383026 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6bec74c-3a30-4a20-979c-94bb79dce9df-config-volume\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.383110 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvnwh\" (UniqueName: \"kubernetes.io/projected/b6bec74c-3a30-4a20-979c-94bb79dce9df-kube-api-access-xvnwh\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.383178 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/b6bec74c-3a30-4a20-979c-94bb79dce9df-secret-volume\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.384112 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6bec74c-3a30-4a20-979c-94bb79dce9df-config-volume\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.391314 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6bec74c-3a30-4a20-979c-94bb79dce9df-secret-volume\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.408249 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvnwh\" (UniqueName: \"kubernetes.io/projected/b6bec74c-3a30-4a20-979c-94bb79dce9df-kube-api-access-xvnwh\") pod \"collect-profiles-29490795-ps2ws\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.472162 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:00 crc kubenswrapper[4823]: I0126 17:15:00.932077 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws"] Jan 26 17:15:01 crc kubenswrapper[4823]: I0126 17:15:01.225887 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" event={"ID":"b6bec74c-3a30-4a20-979c-94bb79dce9df","Type":"ContainerStarted","Data":"3ab00c003e7ff4e8ac0dd715e9d97a3ceba4362711cb9aa4cafcc522d9c13df1"} Jan 26 17:15:02 crc kubenswrapper[4823]: I0126 17:15:02.239689 4823 generic.go:334] "Generic (PLEG): container finished" podID="b6bec74c-3a30-4a20-979c-94bb79dce9df" containerID="cdb0bca254b7c6e66a5273a0c2219dc295a2efdf81ee0a543f4ae1cb5e9b59eb" exitCode=0 Jan 26 17:15:02 crc kubenswrapper[4823]: I0126 17:15:02.239824 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" event={"ID":"b6bec74c-3a30-4a20-979c-94bb79dce9df","Type":"ContainerDied","Data":"cdb0bca254b7c6e66a5273a0c2219dc295a2efdf81ee0a543f4ae1cb5e9b59eb"} Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.591797 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.675901 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6bec74c-3a30-4a20-979c-94bb79dce9df-config-volume\") pod \"b6bec74c-3a30-4a20-979c-94bb79dce9df\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.675990 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6bec74c-3a30-4a20-979c-94bb79dce9df-secret-volume\") pod \"b6bec74c-3a30-4a20-979c-94bb79dce9df\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.676121 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvnwh\" (UniqueName: \"kubernetes.io/projected/b6bec74c-3a30-4a20-979c-94bb79dce9df-kube-api-access-xvnwh\") pod \"b6bec74c-3a30-4a20-979c-94bb79dce9df\" (UID: \"b6bec74c-3a30-4a20-979c-94bb79dce9df\") " Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.676722 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6bec74c-3a30-4a20-979c-94bb79dce9df-config-volume" (OuterVolumeSpecName: "config-volume") pod "b6bec74c-3a30-4a20-979c-94bb79dce9df" (UID: "b6bec74c-3a30-4a20-979c-94bb79dce9df"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.685156 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bec74c-3a30-4a20-979c-94bb79dce9df-kube-api-access-xvnwh" (OuterVolumeSpecName: "kube-api-access-xvnwh") pod "b6bec74c-3a30-4a20-979c-94bb79dce9df" (UID: "b6bec74c-3a30-4a20-979c-94bb79dce9df"). 
InnerVolumeSpecName "kube-api-access-xvnwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.685356 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bec74c-3a30-4a20-979c-94bb79dce9df-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b6bec74c-3a30-4a20-979c-94bb79dce9df" (UID: "b6bec74c-3a30-4a20-979c-94bb79dce9df"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.778064 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b6bec74c-3a30-4a20-979c-94bb79dce9df-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.778103 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b6bec74c-3a30-4a20-979c-94bb79dce9df-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:03 crc kubenswrapper[4823]: I0126 17:15:03.778115 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvnwh\" (UniqueName: \"kubernetes.io/projected/b6bec74c-3a30-4a20-979c-94bb79dce9df-kube-api-access-xvnwh\") on node \"crc\" DevicePath \"\"" Jan 26 17:15:04 crc kubenswrapper[4823]: I0126 17:15:04.257976 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" event={"ID":"b6bec74c-3a30-4a20-979c-94bb79dce9df","Type":"ContainerDied","Data":"3ab00c003e7ff4e8ac0dd715e9d97a3ceba4362711cb9aa4cafcc522d9c13df1"} Jan 26 17:15:04 crc kubenswrapper[4823]: I0126 17:15:04.258020 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ab00c003e7ff4e8ac0dd715e9d97a3ceba4362711cb9aa4cafcc522d9c13df1" Jan 26 17:15:04 crc kubenswrapper[4823]: I0126 17:15:04.258027 4823 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490795-ps2ws" Jan 26 17:15:04 crc kubenswrapper[4823]: I0126 17:15:04.674938 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs"] Jan 26 17:15:04 crc kubenswrapper[4823]: I0126 17:15:04.683090 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490750-2d5zs"] Jan 26 17:15:05 crc kubenswrapper[4823]: I0126 17:15:05.574739 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1136075-30a5-40bc-918e-59c818a5d71f" path="/var/lib/kubelet/pods/e1136075-30a5-40bc-918e-59c818a5d71f/volumes" Jan 26 17:15:10 crc kubenswrapper[4823]: I0126 17:15:10.088486 4823 scope.go:117] "RemoveContainer" containerID="d7b85ff8844ac8a62f757be50a33d3a024f10640b922aff53364eeee41d20cb2" Jan 26 17:16:34 crc kubenswrapper[4823]: I0126 17:16:34.508219 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:16:34 crc kubenswrapper[4823]: I0126 17:16:34.508884 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:17:01 crc kubenswrapper[4823]: I0126 17:17:01.940997 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-p55v2"] Jan 26 17:17:01 crc kubenswrapper[4823]: E0126 17:17:01.942026 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b6bec74c-3a30-4a20-979c-94bb79dce9df" containerName="collect-profiles" Jan 26 17:17:01 crc kubenswrapper[4823]: I0126 17:17:01.942041 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bec74c-3a30-4a20-979c-94bb79dce9df" containerName="collect-profiles" Jan 26 17:17:01 crc kubenswrapper[4823]: I0126 17:17:01.942233 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bec74c-3a30-4a20-979c-94bb79dce9df" containerName="collect-profiles" Jan 26 17:17:01 crc kubenswrapper[4823]: I0126 17:17:01.943711 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:01 crc kubenswrapper[4823]: I0126 17:17:01.950918 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p55v2"] Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.087679 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-utilities\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.087741 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-catalog-content\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.088106 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmh5l\" (UniqueName: \"kubernetes.io/projected/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-kube-api-access-qmh5l\") pod \"redhat-operators-p55v2\" (UID: 
\"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.190402 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-utilities\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.190463 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-catalog-content\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.190580 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmh5l\" (UniqueName: \"kubernetes.io/projected/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-kube-api-access-qmh5l\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.190938 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-utilities\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.191295 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-catalog-content\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " 
pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.217264 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmh5l\" (UniqueName: \"kubernetes.io/projected/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-kube-api-access-qmh5l\") pod \"redhat-operators-p55v2\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.262121 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:02 crc kubenswrapper[4823]: I0126 17:17:02.729336 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-p55v2"] Jan 26 17:17:03 crc kubenswrapper[4823]: I0126 17:17:03.363135 4823 generic.go:334] "Generic (PLEG): container finished" podID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerID="451eeae451c1ee1acc9ea9923c0e589a0b13232f096f734bf111f825f09e5d9a" exitCode=0 Jan 26 17:17:03 crc kubenswrapper[4823]: I0126 17:17:03.363179 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerDied","Data":"451eeae451c1ee1acc9ea9923c0e589a0b13232f096f734bf111f825f09e5d9a"} Jan 26 17:17:03 crc kubenswrapper[4823]: I0126 17:17:03.363402 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerStarted","Data":"6ab5dd9b1a9c27447b4ddc96e77bcfa46cc5c78150abf39dd5a901d7584e0b9d"} Jan 26 17:17:03 crc kubenswrapper[4823]: I0126 17:17:03.365407 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:17:04 crc kubenswrapper[4823]: I0126 17:17:04.508878 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:17:04 crc kubenswrapper[4823]: I0126 17:17:04.509198 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:17:05 crc kubenswrapper[4823]: I0126 17:17:05.392808 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerStarted","Data":"f340e33bdb12416ce6c827750d4a4f2aad762623840f5b63a130d5d6a120358e"} Jan 26 17:17:06 crc kubenswrapper[4823]: I0126 17:17:06.403163 4823 generic.go:334] "Generic (PLEG): container finished" podID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerID="f340e33bdb12416ce6c827750d4a4f2aad762623840f5b63a130d5d6a120358e" exitCode=0 Jan 26 17:17:06 crc kubenswrapper[4823]: I0126 17:17:06.403206 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerDied","Data":"f340e33bdb12416ce6c827750d4a4f2aad762623840f5b63a130d5d6a120358e"} Jan 26 17:17:08 crc kubenswrapper[4823]: I0126 17:17:08.427870 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerStarted","Data":"68b727310fafa209605e150a3c8dab5459eabb41d410d4390372ff648b175ee9"} Jan 26 17:17:08 crc kubenswrapper[4823]: I0126 17:17:08.450653 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-p55v2" podStartSLOduration=3.621120878 podStartE2EDuration="7.450634341s" podCreationTimestamp="2026-01-26 17:17:01 +0000 UTC" firstStartedPulling="2026-01-26 17:17:03.36509353 +0000 UTC m=+9020.050556635" lastFinishedPulling="2026-01-26 17:17:07.194606993 +0000 UTC m=+9023.880070098" observedRunningTime="2026-01-26 17:17:08.446679852 +0000 UTC m=+9025.132142967" watchObservedRunningTime="2026-01-26 17:17:08.450634341 +0000 UTC m=+9025.136097446" Jan 26 17:17:12 crc kubenswrapper[4823]: I0126 17:17:12.262628 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:12 crc kubenswrapper[4823]: I0126 17:17:12.263155 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:13 crc kubenswrapper[4823]: I0126 17:17:13.309589 4823 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-p55v2" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="registry-server" probeResult="failure" output=< Jan 26 17:17:13 crc kubenswrapper[4823]: timeout: failed to connect service ":50051" within 1s Jan 26 17:17:13 crc kubenswrapper[4823]: > Jan 26 17:17:22 crc kubenswrapper[4823]: I0126 17:17:22.326646 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:22 crc kubenswrapper[4823]: I0126 17:17:22.379538 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:22 crc kubenswrapper[4823]: I0126 17:17:22.569050 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p55v2"] Jan 26 17:17:23 crc kubenswrapper[4823]: I0126 17:17:23.568551 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-p55v2" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="registry-server" containerID="cri-o://68b727310fafa209605e150a3c8dab5459eabb41d410d4390372ff648b175ee9" gracePeriod=2 Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.614848 4823 generic.go:334] "Generic (PLEG): container finished" podID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerID="68b727310fafa209605e150a3c8dab5459eabb41d410d4390372ff648b175ee9" exitCode=0 Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.615051 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerDied","Data":"68b727310fafa209605e150a3c8dab5459eabb41d410d4390372ff648b175ee9"} Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.716801 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.907078 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-catalog-content\") pod \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.907163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-utilities\") pod \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.907209 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmh5l\" (UniqueName: \"kubernetes.io/projected/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-kube-api-access-qmh5l\") pod 
\"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\" (UID: \"592ec7f8-e80f-467c-be4c-80f4b7fabd3b\") " Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.908028 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-utilities" (OuterVolumeSpecName: "utilities") pod "592ec7f8-e80f-467c-be4c-80f4b7fabd3b" (UID: "592ec7f8-e80f-467c-be4c-80f4b7fabd3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:26 crc kubenswrapper[4823]: I0126 17:17:26.915814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-kube-api-access-qmh5l" (OuterVolumeSpecName: "kube-api-access-qmh5l") pod "592ec7f8-e80f-467c-be4c-80f4b7fabd3b" (UID: "592ec7f8-e80f-467c-be4c-80f4b7fabd3b"). InnerVolumeSpecName "kube-api-access-qmh5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.009696 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.009738 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmh5l\" (UniqueName: \"kubernetes.io/projected/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-kube-api-access-qmh5l\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.028818 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "592ec7f8-e80f-467c-be4c-80f4b7fabd3b" (UID: "592ec7f8-e80f-467c-be4c-80f4b7fabd3b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.111895 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/592ec7f8-e80f-467c-be4c-80f4b7fabd3b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.625870 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-p55v2" event={"ID":"592ec7f8-e80f-467c-be4c-80f4b7fabd3b","Type":"ContainerDied","Data":"6ab5dd9b1a9c27447b4ddc96e77bcfa46cc5c78150abf39dd5a901d7584e0b9d"} Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.626691 4823 scope.go:117] "RemoveContainer" containerID="68b727310fafa209605e150a3c8dab5459eabb41d410d4390372ff648b175ee9" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.625929 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-p55v2" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.650199 4823 scope.go:117] "RemoveContainer" containerID="f340e33bdb12416ce6c827750d4a4f2aad762623840f5b63a130d5d6a120358e" Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.657600 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-p55v2"] Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.670971 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-p55v2"] Jan 26 17:17:27 crc kubenswrapper[4823]: I0126 17:17:27.682508 4823 scope.go:117] "RemoveContainer" containerID="451eeae451c1ee1acc9ea9923c0e589a0b13232f096f734bf111f825f09e5d9a" Jan 26 17:17:29 crc kubenswrapper[4823]: I0126 17:17:29.572869 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" path="/var/lib/kubelet/pods/592ec7f8-e80f-467c-be4c-80f4b7fabd3b/volumes" Jan 26 17:17:34 crc 
kubenswrapper[4823]: I0126 17:17:34.508966 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:17:34 crc kubenswrapper[4823]: I0126 17:17:34.509854 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:17:34 crc kubenswrapper[4823]: I0126 17:17:34.509943 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:17:34 crc kubenswrapper[4823]: I0126 17:17:34.511771 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b8e1cf30bb7fbe754bcbf2357ab788b171c49fd55aab1b6611084bd72258b62"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:17:34 crc kubenswrapper[4823]: I0126 17:17:34.512164 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://6b8e1cf30bb7fbe754bcbf2357ab788b171c49fd55aab1b6611084bd72258b62" gracePeriod=600 Jan 26 17:17:35 crc kubenswrapper[4823]: I0126 17:17:35.707161 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" 
containerID="6b8e1cf30bb7fbe754bcbf2357ab788b171c49fd55aab1b6611084bd72258b62" exitCode=0 Jan 26 17:17:35 crc kubenswrapper[4823]: I0126 17:17:35.707356 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"6b8e1cf30bb7fbe754bcbf2357ab788b171c49fd55aab1b6611084bd72258b62"} Jan 26 17:17:35 crc kubenswrapper[4823]: I0126 17:17:35.707967 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407"} Jan 26 17:17:35 crc kubenswrapper[4823]: I0126 17:17:35.707998 4823 scope.go:117] "RemoveContainer" containerID="8fe87a6208436f246c8ff3dca68bb1a9916babdc1b252fa7cafee6ae76583342" Jan 26 17:20:04 crc kubenswrapper[4823]: I0126 17:20:04.508806 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:20:04 crc kubenswrapper[4823]: I0126 17:20:04.509413 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:20:34 crc kubenswrapper[4823]: I0126 17:20:34.507870 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 26 17:20:34 crc kubenswrapper[4823]: I0126 17:20:34.508515 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.588488 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8tk6j"] Jan 26 17:20:48 crc kubenswrapper[4823]: E0126 17:20:48.589433 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="registry-server" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.589450 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="registry-server" Jan 26 17:20:48 crc kubenswrapper[4823]: E0126 17:20:48.589467 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="extract-utilities" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.589475 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="extract-utilities" Jan 26 17:20:48 crc kubenswrapper[4823]: E0126 17:20:48.589499 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="extract-content" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.589506 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" containerName="extract-content" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.589738 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="592ec7f8-e80f-467c-be4c-80f4b7fabd3b" 
containerName="registry-server" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.591333 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.610626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5klg\" (UniqueName: \"kubernetes.io/projected/ab41b681-b812-47c8-b6fe-799efc627a23-kube-api-access-k5klg\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.610751 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8tk6j"] Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.611153 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-utilities\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.611235 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-catalog-content\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.717067 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-utilities\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " 
pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.717210 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-catalog-content\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.717400 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5klg\" (UniqueName: \"kubernetes.io/projected/ab41b681-b812-47c8-b6fe-799efc627a23-kube-api-access-k5klg\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.717884 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-utilities\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.718192 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-catalog-content\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.755207 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5klg\" (UniqueName: \"kubernetes.io/projected/ab41b681-b812-47c8-b6fe-799efc627a23-kube-api-access-k5klg\") pod \"community-operators-8tk6j\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " 
pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:48 crc kubenswrapper[4823]: I0126 17:20:48.914806 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:49 crc kubenswrapper[4823]: I0126 17:20:49.990203 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8tk6j"] Jan 26 17:20:50 crc kubenswrapper[4823]: I0126 17:20:50.541173 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab41b681-b812-47c8-b6fe-799efc627a23" containerID="8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b" exitCode=0 Jan 26 17:20:50 crc kubenswrapper[4823]: I0126 17:20:50.541229 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerDied","Data":"8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b"} Jan 26 17:20:50 crc kubenswrapper[4823]: I0126 17:20:50.541259 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerStarted","Data":"3f9a7db2388c2ccef4214d5611d7e5b7c9dda84158dc50308e5f2c43c18a5914"} Jan 26 17:20:51 crc kubenswrapper[4823]: I0126 17:20:51.549615 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerStarted","Data":"1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0"} Jan 26 17:20:52 crc kubenswrapper[4823]: I0126 17:20:52.559413 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab41b681-b812-47c8-b6fe-799efc627a23" containerID="1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0" exitCode=0 Jan 26 17:20:52 crc kubenswrapper[4823]: I0126 17:20:52.559509 4823 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerDied","Data":"1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0"} Jan 26 17:20:53 crc kubenswrapper[4823]: I0126 17:20:53.594134 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerStarted","Data":"08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a"} Jan 26 17:20:53 crc kubenswrapper[4823]: I0126 17:20:53.625528 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8tk6j" podStartSLOduration=2.870409467 podStartE2EDuration="5.625498321s" podCreationTimestamp="2026-01-26 17:20:48 +0000 UTC" firstStartedPulling="2026-01-26 17:20:50.543262825 +0000 UTC m=+9247.228725930" lastFinishedPulling="2026-01-26 17:20:53.298351679 +0000 UTC m=+9249.983814784" observedRunningTime="2026-01-26 17:20:53.6101353 +0000 UTC m=+9250.295598405" watchObservedRunningTime="2026-01-26 17:20:53.625498321 +0000 UTC m=+9250.310961426" Jan 26 17:20:58 crc kubenswrapper[4823]: I0126 17:20:58.916035 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:58 crc kubenswrapper[4823]: I0126 17:20:58.916296 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:58 crc kubenswrapper[4823]: I0126 17:20:58.963983 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:59 crc kubenswrapper[4823]: I0126 17:20:59.696206 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:20:59 crc kubenswrapper[4823]: I0126 
17:20:59.757329 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8tk6j"] Jan 26 17:21:01 crc kubenswrapper[4823]: I0126 17:21:01.658150 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8tk6j" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="registry-server" containerID="cri-o://08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a" gracePeriod=2 Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.117894 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.306596 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-utilities\") pod \"ab41b681-b812-47c8-b6fe-799efc627a23\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.307083 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-catalog-content\") pod \"ab41b681-b812-47c8-b6fe-799efc627a23\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.307208 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5klg\" (UniqueName: \"kubernetes.io/projected/ab41b681-b812-47c8-b6fe-799efc627a23-kube-api-access-k5klg\") pod \"ab41b681-b812-47c8-b6fe-799efc627a23\" (UID: \"ab41b681-b812-47c8-b6fe-799efc627a23\") " Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.307612 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-utilities" (OuterVolumeSpecName: 
"utilities") pod "ab41b681-b812-47c8-b6fe-799efc627a23" (UID: "ab41b681-b812-47c8-b6fe-799efc627a23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.307837 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.314667 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab41b681-b812-47c8-b6fe-799efc627a23-kube-api-access-k5klg" (OuterVolumeSpecName: "kube-api-access-k5klg") pod "ab41b681-b812-47c8-b6fe-799efc627a23" (UID: "ab41b681-b812-47c8-b6fe-799efc627a23"). InnerVolumeSpecName "kube-api-access-k5klg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.364827 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab41b681-b812-47c8-b6fe-799efc627a23" (UID: "ab41b681-b812-47c8-b6fe-799efc627a23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.409726 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab41b681-b812-47c8-b6fe-799efc627a23-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.409760 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5klg\" (UniqueName: \"kubernetes.io/projected/ab41b681-b812-47c8-b6fe-799efc627a23-kube-api-access-k5klg\") on node \"crc\" DevicePath \"\"" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.675985 4823 generic.go:334] "Generic (PLEG): container finished" podID="ab41b681-b812-47c8-b6fe-799efc627a23" containerID="08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a" exitCode=0 Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.676044 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerDied","Data":"08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a"} Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.676051 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8tk6j" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.676084 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8tk6j" event={"ID":"ab41b681-b812-47c8-b6fe-799efc627a23","Type":"ContainerDied","Data":"3f9a7db2388c2ccef4214d5611d7e5b7c9dda84158dc50308e5f2c43c18a5914"} Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.676111 4823 scope.go:117] "RemoveContainer" containerID="08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.722394 4823 scope.go:117] "RemoveContainer" containerID="1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0" Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.735502 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8tk6j"] Jan 26 17:21:02 crc kubenswrapper[4823]: I0126 17:21:02.743071 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8tk6j"] Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.001127 4823 scope.go:117] "RemoveContainer" containerID="8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.058790 4823 scope.go:117] "RemoveContainer" containerID="08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a" Jan 26 17:21:03 crc kubenswrapper[4823]: E0126 17:21:03.059693 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a\": container with ID starting with 08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a not found: ID does not exist" containerID="08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.059753 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a"} err="failed to get container status \"08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a\": rpc error: code = NotFound desc = could not find container \"08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a\": container with ID starting with 08b8fa58ae3588f7f8487cebc8e2d6283a57bc2c56d65247d8e01e1d70e7863a not found: ID does not exist" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.059795 4823 scope.go:117] "RemoveContainer" containerID="1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0" Jan 26 17:21:03 crc kubenswrapper[4823]: E0126 17:21:03.060331 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0\": container with ID starting with 1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0 not found: ID does not exist" containerID="1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.060382 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0"} err="failed to get container status \"1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0\": rpc error: code = NotFound desc = could not find container \"1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0\": container with ID starting with 1b9b05c3b481737f97c843b77ab88b6e29cfe99008924d2935cedece0b91a4e0 not found: ID does not exist" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.060401 4823 scope.go:117] "RemoveContainer" containerID="8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b" Jan 26 17:21:03 crc kubenswrapper[4823]: E0126 
17:21:03.060721 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b\": container with ID starting with 8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b not found: ID does not exist" containerID="8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.060753 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b"} err="failed to get container status \"8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b\": rpc error: code = NotFound desc = could not find container \"8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b\": container with ID starting with 8c7a7e724f4b9b6c0aac1ded568ee879e36d16919c9379e369a0955f4de56a4b not found: ID does not exist" Jan 26 17:21:03 crc kubenswrapper[4823]: I0126 17:21:03.572577 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" path="/var/lib/kubelet/pods/ab41b681-b812-47c8-b6fe-799efc627a23/volumes" Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.507922 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.508239 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.508302 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.509260 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.509354 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" gracePeriod=600 Jan 26 17:21:04 crc kubenswrapper[4823]: E0126 17:21:04.662840 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.694408 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" exitCode=0 Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.694459 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407"} Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.694504 4823 scope.go:117] "RemoveContainer" containerID="6b8e1cf30bb7fbe754bcbf2357ab788b171c49fd55aab1b6611084bd72258b62" Jan 26 17:21:04 crc kubenswrapper[4823]: I0126 17:21:04.695160 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:21:04 crc kubenswrapper[4823]: E0126 17:21:04.695486 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:21:16 crc kubenswrapper[4823]: I0126 17:21:16.561005 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:21:16 crc kubenswrapper[4823]: E0126 17:21:16.561815 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:21:30 crc kubenswrapper[4823]: I0126 17:21:30.559910 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:21:30 crc kubenswrapper[4823]: E0126 17:21:30.560814 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:21:41 crc kubenswrapper[4823]: I0126 17:21:41.561063 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:21:41 crc kubenswrapper[4823]: E0126 17:21:41.561865 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:21:56 crc kubenswrapper[4823]: I0126 17:21:56.560699 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:21:56 crc kubenswrapper[4823]: E0126 17:21:56.561549 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:22:08 crc kubenswrapper[4823]: I0126 17:22:08.560836 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:22:08 crc kubenswrapper[4823]: E0126 17:22:08.561754 4823 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:22:22 crc kubenswrapper[4823]: I0126 17:22:22.561117 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:22:22 crc kubenswrapper[4823]: E0126 17:22:22.562215 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:22:36 crc kubenswrapper[4823]: I0126 17:22:36.561806 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:22:36 crc kubenswrapper[4823]: E0126 17:22:36.562691 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:22:48 crc kubenswrapper[4823]: I0126 17:22:48.560943 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:22:48 crc kubenswrapper[4823]: E0126 17:22:48.561818 4823 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:23:00 crc kubenswrapper[4823]: I0126 17:23:00.560605 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:23:00 crc kubenswrapper[4823]: E0126 17:23:00.561482 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.766209 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qngvh"] Jan 26 17:23:06 crc kubenswrapper[4823]: E0126 17:23:06.768033 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="extract-utilities" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.768049 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="extract-utilities" Jan 26 17:23:06 crc kubenswrapper[4823]: E0126 17:23:06.768060 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="registry-server" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.768066 4823 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="registry-server" Jan 26 17:23:06 crc kubenswrapper[4823]: E0126 17:23:06.768079 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="extract-content" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.768085 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="extract-content" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.768272 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab41b681-b812-47c8-b6fe-799efc627a23" containerName="registry-server" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.769668 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.783833 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngvh"] Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.917668 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34655fcf-06c6-4e25-89ff-44ae9974fb63-catalog-content\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.917806 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktlq\" (UniqueName: \"kubernetes.io/projected/34655fcf-06c6-4e25-89ff-44ae9974fb63-kube-api-access-qktlq\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:06 crc kubenswrapper[4823]: I0126 17:23:06.917853 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34655fcf-06c6-4e25-89ff-44ae9974fb63-utilities\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.019955 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qktlq\" (UniqueName: \"kubernetes.io/projected/34655fcf-06c6-4e25-89ff-44ae9974fb63-kube-api-access-qktlq\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.020023 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34655fcf-06c6-4e25-89ff-44ae9974fb63-utilities\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.020132 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34655fcf-06c6-4e25-89ff-44ae9974fb63-catalog-content\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.020627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34655fcf-06c6-4e25-89ff-44ae9974fb63-utilities\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.020659 4823 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34655fcf-06c6-4e25-89ff-44ae9974fb63-catalog-content\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.044954 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktlq\" (UniqueName: \"kubernetes.io/projected/34655fcf-06c6-4e25-89ff-44ae9974fb63-kube-api-access-qktlq\") pod \"certified-operators-qngvh\" (UID: \"34655fcf-06c6-4e25-89ff-44ae9974fb63\") " pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.100938 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.610337 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngvh"] Jan 26 17:23:07 crc kubenswrapper[4823]: I0126 17:23:07.748104 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngvh" event={"ID":"34655fcf-06c6-4e25-89ff-44ae9974fb63","Type":"ContainerStarted","Data":"22f8bc2fa169a8beaa4de26c8aec42bc83f948ad606bbd4ea193df226ab7ed60"} Jan 26 17:23:08 crc kubenswrapper[4823]: I0126 17:23:08.756814 4823 generic.go:334] "Generic (PLEG): container finished" podID="34655fcf-06c6-4e25-89ff-44ae9974fb63" containerID="b3ebce1e3366aca526ee91a5df0c189b830195fb2444243360e171cf1882391a" exitCode=0 Jan 26 17:23:08 crc kubenswrapper[4823]: I0126 17:23:08.756873 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngvh" event={"ID":"34655fcf-06c6-4e25-89ff-44ae9974fb63","Type":"ContainerDied","Data":"b3ebce1e3366aca526ee91a5df0c189b830195fb2444243360e171cf1882391a"} Jan 26 17:23:08 crc kubenswrapper[4823]: I0126 
17:23:08.759342 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:23:09 crc kubenswrapper[4823]: I0126 17:23:09.827456 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9b85d"] Jan 26 17:23:09 crc kubenswrapper[4823]: I0126 17:23:09.841865 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b85d"] Jan 26 17:23:09 crc kubenswrapper[4823]: I0126 17:23:09.842021 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:09 crc kubenswrapper[4823]: I0126 17:23:09.937498 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-utilities\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:09 crc kubenswrapper[4823]: I0126 17:23:09.937662 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4wv\" (UniqueName: \"kubernetes.io/projected/b8543827-a9a9-4941-a676-8d20b588f896-kube-api-access-4v4wv\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:09 crc kubenswrapper[4823]: I0126 17:23:09.937735 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-catalog-content\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.039663 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4v4wv\" (UniqueName: \"kubernetes.io/projected/b8543827-a9a9-4941-a676-8d20b588f896-kube-api-access-4v4wv\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.039763 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-catalog-content\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.039837 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-utilities\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.040584 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-catalog-content\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.042281 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-utilities\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.058336 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4v4wv\" (UniqueName: \"kubernetes.io/projected/b8543827-a9a9-4941-a676-8d20b588f896-kube-api-access-4v4wv\") pod \"redhat-marketplace-9b85d\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.184611 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.716019 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b85d"] Jan 26 17:23:10 crc kubenswrapper[4823]: I0126 17:23:10.845153 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b85d" event={"ID":"b8543827-a9a9-4941-a676-8d20b588f896","Type":"ContainerStarted","Data":"9756f098d78f0c2b0ffe4126efbc98dd06aadb67c5587f03525b8f9022d0b5c4"} Jan 26 17:23:11 crc kubenswrapper[4823]: I0126 17:23:11.858581 4823 generic.go:334] "Generic (PLEG): container finished" podID="b8543827-a9a9-4941-a676-8d20b588f896" containerID="0ebd98ab62f2eb4f49e7d6d867929660dc854e5274078def06279ad9c3b34558" exitCode=0 Jan 26 17:23:11 crc kubenswrapper[4823]: I0126 17:23:11.858646 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b85d" event={"ID":"b8543827-a9a9-4941-a676-8d20b588f896","Type":"ContainerDied","Data":"0ebd98ab62f2eb4f49e7d6d867929660dc854e5274078def06279ad9c3b34558"} Jan 26 17:23:13 crc kubenswrapper[4823]: I0126 17:23:13.567674 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:23:13 crc kubenswrapper[4823]: E0126 17:23:13.568344 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:23:14 crc kubenswrapper[4823]: I0126 17:23:14.887309 4823 generic.go:334] "Generic (PLEG): container finished" podID="34655fcf-06c6-4e25-89ff-44ae9974fb63" containerID="ab5224740e49fd13e0da84e8b0f0efc07b40c2aeb96435dc5db7084f97846a07" exitCode=0 Jan 26 17:23:14 crc kubenswrapper[4823]: I0126 17:23:14.887405 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngvh" event={"ID":"34655fcf-06c6-4e25-89ff-44ae9974fb63","Type":"ContainerDied","Data":"ab5224740e49fd13e0da84e8b0f0efc07b40c2aeb96435dc5db7084f97846a07"} Jan 26 17:23:15 crc kubenswrapper[4823]: I0126 17:23:15.900544 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qngvh" event={"ID":"34655fcf-06c6-4e25-89ff-44ae9974fb63","Type":"ContainerStarted","Data":"2768afc05a8c15d2e3f439f1367a8b5fcb830629376605d4b042e75be334f527"} Jan 26 17:23:15 crc kubenswrapper[4823]: I0126 17:23:15.902526 4823 generic.go:334] "Generic (PLEG): container finished" podID="b8543827-a9a9-4941-a676-8d20b588f896" containerID="16ee46bed5760c2f44b970ce035ff8ecd6d9090886a1629c7fae82929ba051cf" exitCode=0 Jan 26 17:23:15 crc kubenswrapper[4823]: I0126 17:23:15.902583 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b85d" event={"ID":"b8543827-a9a9-4941-a676-8d20b588f896","Type":"ContainerDied","Data":"16ee46bed5760c2f44b970ce035ff8ecd6d9090886a1629c7fae82929ba051cf"} Jan 26 17:23:15 crc kubenswrapper[4823]: I0126 17:23:15.927647 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qngvh" podStartSLOduration=3.403072655 podStartE2EDuration="9.927627459s" podCreationTimestamp="2026-01-26 17:23:06 
+0000 UTC" firstStartedPulling="2026-01-26 17:23:08.759104795 +0000 UTC m=+9385.444567900" lastFinishedPulling="2026-01-26 17:23:15.283659599 +0000 UTC m=+9391.969122704" observedRunningTime="2026-01-26 17:23:15.91820611 +0000 UTC m=+9392.603669226" watchObservedRunningTime="2026-01-26 17:23:15.927627459 +0000 UTC m=+9392.613090564" Jan 26 17:23:16 crc kubenswrapper[4823]: I0126 17:23:16.915303 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b85d" event={"ID":"b8543827-a9a9-4941-a676-8d20b588f896","Type":"ContainerStarted","Data":"4f4809de5934d07970fdc33ce1050ca7fa18e93563d927b9371980c41a2fec23"} Jan 26 17:23:16 crc kubenswrapper[4823]: I0126 17:23:16.942010 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9b85d" podStartSLOduration=5.595045278 podStartE2EDuration="7.941989744s" podCreationTimestamp="2026-01-26 17:23:09 +0000 UTC" firstStartedPulling="2026-01-26 17:23:14.013952487 +0000 UTC m=+9390.699415592" lastFinishedPulling="2026-01-26 17:23:16.360896953 +0000 UTC m=+9393.046360058" observedRunningTime="2026-01-26 17:23:16.934048166 +0000 UTC m=+9393.619511281" watchObservedRunningTime="2026-01-26 17:23:16.941989744 +0000 UTC m=+9393.627452849" Jan 26 17:23:17 crc kubenswrapper[4823]: I0126 17:23:17.101911 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:17 crc kubenswrapper[4823]: I0126 17:23:17.102110 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:17 crc kubenswrapper[4823]: I0126 17:23:17.157444 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:20 crc kubenswrapper[4823]: I0126 17:23:20.186821 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:20 crc kubenswrapper[4823]: I0126 17:23:20.187453 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:20 crc kubenswrapper[4823]: I0126 17:23:20.241069 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:25 crc kubenswrapper[4823]: I0126 17:23:25.560614 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:23:25 crc kubenswrapper[4823]: E0126 17:23:25.561504 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.154501 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qngvh" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.223402 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qngvh"] Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.283078 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.283396 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q2245" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="registry-server" 
containerID="cri-o://5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b" gracePeriod=2 Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.818608 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2245" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.871793 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-utilities\") pod \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.871915 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2zcf\" (UniqueName: \"kubernetes.io/projected/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-kube-api-access-v2zcf\") pod \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.871985 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-catalog-content\") pod \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\" (UID: \"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7\") " Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.872555 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-utilities" (OuterVolumeSpecName: "utilities") pod "0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" (UID: "0e6ed00e-cd32-43aa-b8e9-f5082085c7c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.881920 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-kube-api-access-v2zcf" (OuterVolumeSpecName: "kube-api-access-v2zcf") pod "0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" (UID: "0e6ed00e-cd32-43aa-b8e9-f5082085c7c7"). InnerVolumeSpecName "kube-api-access-v2zcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.946153 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" (UID: "0e6ed00e-cd32-43aa-b8e9-f5082085c7c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.974907 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.974960 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2zcf\" (UniqueName: \"kubernetes.io/projected/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-kube-api-access-v2zcf\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:27 crc kubenswrapper[4823]: I0126 17:23:27.974979 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.013539 4823 generic.go:334] "Generic (PLEG): container finished" podID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" 
containerID="5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b" exitCode=0 Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.014563 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q2245" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.019567 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerDied","Data":"5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b"} Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.019653 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q2245" event={"ID":"0e6ed00e-cd32-43aa-b8e9-f5082085c7c7","Type":"ContainerDied","Data":"cfc93bdcacb7fcdc8127ea5f2cb752176bc55b512ccfb39c89eaa495615ddc32"} Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.019681 4823 scope.go:117] "RemoveContainer" containerID="5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.044195 4823 scope.go:117] "RemoveContainer" containerID="cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.052424 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.068271 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q2245"] Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.147382 4823 scope.go:117] "RemoveContainer" containerID="53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.172294 4823 scope.go:117] "RemoveContainer" containerID="5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b" Jan 26 
17:23:28 crc kubenswrapper[4823]: E0126 17:23:28.172772 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b\": container with ID starting with 5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b not found: ID does not exist" containerID="5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.172812 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b"} err="failed to get container status \"5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b\": rpc error: code = NotFound desc = could not find container \"5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b\": container with ID starting with 5ee2a5e06fb2abd522f635e0854e006b5ad3f00bbe22197026b4f0531f58788b not found: ID does not exist" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.172839 4823 scope.go:117] "RemoveContainer" containerID="cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80" Jan 26 17:23:28 crc kubenswrapper[4823]: E0126 17:23:28.173191 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80\": container with ID starting with cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80 not found: ID does not exist" containerID="cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.173222 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80"} err="failed to get container status 
\"cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80\": rpc error: code = NotFound desc = could not find container \"cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80\": container with ID starting with cbb2e29a2957277220e085d73396799ee5f6494f378c8e2b9cda93c800e35e80 not found: ID does not exist" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.173247 4823 scope.go:117] "RemoveContainer" containerID="53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4" Jan 26 17:23:28 crc kubenswrapper[4823]: E0126 17:23:28.173625 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4\": container with ID starting with 53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4 not found: ID does not exist" containerID="53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4" Jan 26 17:23:28 crc kubenswrapper[4823]: I0126 17:23:28.173669 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4"} err="failed to get container status \"53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4\": rpc error: code = NotFound desc = could not find container \"53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4\": container with ID starting with 53c452e316752338e0697a4039fb62873768562ed1b505c5ed0b10fc56af77c4 not found: ID does not exist" Jan 26 17:23:29 crc kubenswrapper[4823]: I0126 17:23:29.569914 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" path="/var/lib/kubelet/pods/0e6ed00e-cd32-43aa-b8e9-f5082085c7c7/volumes" Jan 26 17:23:30 crc kubenswrapper[4823]: I0126 17:23:30.237417 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:32 crc kubenswrapper[4823]: I0126 17:23:32.592980 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b85d"] Jan 26 17:23:32 crc kubenswrapper[4823]: I0126 17:23:32.593559 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9b85d" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="registry-server" containerID="cri-o://4f4809de5934d07970fdc33ce1050ca7fa18e93563d927b9371980c41a2fec23" gracePeriod=2 Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.071958 4823 generic.go:334] "Generic (PLEG): container finished" podID="b8543827-a9a9-4941-a676-8d20b588f896" containerID="4f4809de5934d07970fdc33ce1050ca7fa18e93563d927b9371980c41a2fec23" exitCode=0 Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.072047 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b85d" event={"ID":"b8543827-a9a9-4941-a676-8d20b588f896","Type":"ContainerDied","Data":"4f4809de5934d07970fdc33ce1050ca7fa18e93563d927b9371980c41a2fec23"} Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.540133 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.612519 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-utilities\") pod \"b8543827-a9a9-4941-a676-8d20b588f896\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.612578 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-catalog-content\") pod \"b8543827-a9a9-4941-a676-8d20b588f896\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.612650 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v4wv\" (UniqueName: \"kubernetes.io/projected/b8543827-a9a9-4941-a676-8d20b588f896-kube-api-access-4v4wv\") pod \"b8543827-a9a9-4941-a676-8d20b588f896\" (UID: \"b8543827-a9a9-4941-a676-8d20b588f896\") " Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.613727 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-utilities" (OuterVolumeSpecName: "utilities") pod "b8543827-a9a9-4941-a676-8d20b588f896" (UID: "b8543827-a9a9-4941-a676-8d20b588f896"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.620653 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8543827-a9a9-4941-a676-8d20b588f896-kube-api-access-4v4wv" (OuterVolumeSpecName: "kube-api-access-4v4wv") pod "b8543827-a9a9-4941-a676-8d20b588f896" (UID: "b8543827-a9a9-4941-a676-8d20b588f896"). InnerVolumeSpecName "kube-api-access-4v4wv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.640415 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b8543827-a9a9-4941-a676-8d20b588f896" (UID: "b8543827-a9a9-4941-a676-8d20b588f896"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.715634 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.715691 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b8543827-a9a9-4941-a676-8d20b588f896-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:34 crc kubenswrapper[4823]: I0126 17:23:34.715705 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v4wv\" (UniqueName: \"kubernetes.io/projected/b8543827-a9a9-4941-a676-8d20b588f896-kube-api-access-4v4wv\") on node \"crc\" DevicePath \"\"" Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.088099 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b85d" event={"ID":"b8543827-a9a9-4941-a676-8d20b588f896","Type":"ContainerDied","Data":"9756f098d78f0c2b0ffe4126efbc98dd06aadb67c5587f03525b8f9022d0b5c4"} Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.088179 4823 scope.go:117] "RemoveContainer" containerID="4f4809de5934d07970fdc33ce1050ca7fa18e93563d927b9371980c41a2fec23" Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.088314 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b85d" Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.109505 4823 scope.go:117] "RemoveContainer" containerID="16ee46bed5760c2f44b970ce035ff8ecd6d9090886a1629c7fae82929ba051cf" Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.124437 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b85d"] Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.135507 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b85d"] Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.156839 4823 scope.go:117] "RemoveContainer" containerID="0ebd98ab62f2eb4f49e7d6d867929660dc854e5274078def06279ad9c3b34558" Jan 26 17:23:35 crc kubenswrapper[4823]: I0126 17:23:35.573146 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8543827-a9a9-4941-a676-8d20b588f896" path="/var/lib/kubelet/pods/b8543827-a9a9-4941-a676-8d20b588f896/volumes" Jan 26 17:23:38 crc kubenswrapper[4823]: I0126 17:23:38.560318 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:23:38 crc kubenswrapper[4823]: E0126 17:23:38.561353 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:23:53 crc kubenswrapper[4823]: I0126 17:23:53.568844 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:23:53 crc kubenswrapper[4823]: E0126 17:23:53.569547 4823 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:24:07 crc kubenswrapper[4823]: I0126 17:24:07.560925 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:24:07 crc kubenswrapper[4823]: E0126 17:24:07.561597 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:24:20 crc kubenswrapper[4823]: I0126 17:24:20.560333 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:24:20 crc kubenswrapper[4823]: E0126 17:24:20.561112 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:24:32 crc kubenswrapper[4823]: I0126 17:24:32.560719 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:24:32 crc kubenswrapper[4823]: E0126 17:24:32.562285 4823 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:24:47 crc kubenswrapper[4823]: I0126 17:24:47.561465 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:24:47 crc kubenswrapper[4823]: E0126 17:24:47.562157 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:25:00 crc kubenswrapper[4823]: I0126 17:25:00.560244 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:25:00 crc kubenswrapper[4823]: E0126 17:25:00.561005 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:25:14 crc kubenswrapper[4823]: I0126 17:25:14.561693 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:25:14 crc kubenswrapper[4823]: E0126 17:25:14.562628 4823 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:25:26 crc kubenswrapper[4823]: I0126 17:25:26.560539 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:25:26 crc kubenswrapper[4823]: E0126 17:25:26.562037 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:25:40 crc kubenswrapper[4823]: I0126 17:25:40.560532 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:25:40 crc kubenswrapper[4823]: E0126 17:25:40.561289 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:25:53 crc kubenswrapper[4823]: I0126 17:25:53.569067 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:25:53 crc kubenswrapper[4823]: E0126 
17:25:53.572119 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:26:06 crc kubenswrapper[4823]: I0126 17:26:06.561169 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407" Jan 26 17:26:07 crc kubenswrapper[4823]: I0126 17:26:07.603770 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"3ee64b2317bf7c0af97daed171c60a01618577c5b2362fbb06dbc12f26a2e844"} Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.133421 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-sfw6r"] Jan 26 17:28:25 crc kubenswrapper[4823]: E0126 17:28:25.134938 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="extract-content" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.134961 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="extract-content" Jan 26 17:28:25 crc kubenswrapper[4823]: E0126 17:28:25.134974 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="registry-server" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.134981 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="registry-server" Jan 26 17:28:25 crc kubenswrapper[4823]: E0126 17:28:25.134998 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="extract-utilities" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.135006 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="extract-utilities" Jan 26 17:28:25 crc kubenswrapper[4823]: E0126 17:28:25.135013 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="extract-utilities" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.135022 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="extract-utilities" Jan 26 17:28:25 crc kubenswrapper[4823]: E0126 17:28:25.135042 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="registry-server" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.135047 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="registry-server" Jan 26 17:28:25 crc kubenswrapper[4823]: E0126 17:28:25.135060 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="extract-content" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.135066 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="extract-content" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.135240 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e6ed00e-cd32-43aa-b8e9-f5082085c7c7" containerName="registry-server" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.135253 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8543827-a9a9-4941-a676-8d20b588f896" containerName="registry-server" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.136722 
4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.149465 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfw6r"] Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.211421 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-catalog-content\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.212007 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk2tg\" (UniqueName: \"kubernetes.io/projected/96794203-e0fa-4f22-84df-6ae5fa16f6f3-kube-api-access-sk2tg\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.212081 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-utilities\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.314623 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-utilities\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.314726 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-catalog-content\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.314883 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk2tg\" (UniqueName: \"kubernetes.io/projected/96794203-e0fa-4f22-84df-6ae5fa16f6f3-kube-api-access-sk2tg\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.315764 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-utilities\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.315778 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-catalog-content\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.338027 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk2tg\" (UniqueName: \"kubernetes.io/projected/96794203-e0fa-4f22-84df-6ae5fa16f6f3-kube-api-access-sk2tg\") pod \"redhat-operators-sfw6r\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") " pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:25 crc kubenswrapper[4823]: I0126 17:28:25.462797 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-sfw6r" Jan 26 17:28:26 crc kubenswrapper[4823]: I0126 17:28:26.001610 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-sfw6r"] Jan 26 17:28:26 crc kubenswrapper[4823]: I0126 17:28:26.886624 4823 generic.go:334] "Generic (PLEG): container finished" podID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerID="4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec" exitCode=0 Jan 26 17:28:26 crc kubenswrapper[4823]: I0126 17:28:26.887686 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerDied","Data":"4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec"} Jan 26 17:28:26 crc kubenswrapper[4823]: I0126 17:28:26.887767 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerStarted","Data":"cdffbd541ed2912afb97f56e3a10cbd949ded92401283995e7718ea76135738f"} Jan 26 17:28:26 crc kubenswrapper[4823]: I0126 17:28:26.889917 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:28:27 crc kubenswrapper[4823]: I0126 17:28:27.897625 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerStarted","Data":"8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a"} Jan 26 17:28:28 crc kubenswrapper[4823]: I0126 17:28:28.907555 4823 generic.go:334] "Generic (PLEG): container finished" podID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerID="8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a" exitCode=0 Jan 26 17:28:28 crc kubenswrapper[4823]: I0126 17:28:28.907918 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerDied","Data":"8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a"}
Jan 26 17:28:29 crc kubenswrapper[4823]: I0126 17:28:29.917844 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerStarted","Data":"8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1"}
Jan 26 17:28:29 crc kubenswrapper[4823]: I0126 17:28:29.944266 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-sfw6r" podStartSLOduration=2.493649361 podStartE2EDuration="4.944248759s" podCreationTimestamp="2026-01-26 17:28:25 +0000 UTC" firstStartedPulling="2026-01-26 17:28:26.88964201 +0000 UTC m=+9703.575105115" lastFinishedPulling="2026-01-26 17:28:29.340241418 +0000 UTC m=+9706.025704513" observedRunningTime="2026-01-26 17:28:29.934611607 +0000 UTC m=+9706.620074712" watchObservedRunningTime="2026-01-26 17:28:29.944248759 +0000 UTC m=+9706.629711864"
Jan 26 17:28:34 crc kubenswrapper[4823]: I0126 17:28:34.508689 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:28:34 crc kubenswrapper[4823]: I0126 17:28:34.509282 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:28:35 crc kubenswrapper[4823]: I0126 17:28:35.463401 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-sfw6r"
Jan 26 17:28:35 crc kubenswrapper[4823]: I0126 17:28:35.463680 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-sfw6r"
Jan 26 17:28:35 crc kubenswrapper[4823]: I0126 17:28:35.569574 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-sfw6r"
Jan 26 17:28:36 crc kubenswrapper[4823]: I0126 17:28:36.013797 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-sfw6r"
Jan 26 17:28:36 crc kubenswrapper[4823]: I0126 17:28:36.065304 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfw6r"]
Jan 26 17:28:37 crc kubenswrapper[4823]: I0126 17:28:37.988691 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-sfw6r" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="registry-server" containerID="cri-o://8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1" gracePeriod=2
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.649212 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfw6r"
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.812478 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk2tg\" (UniqueName: \"kubernetes.io/projected/96794203-e0fa-4f22-84df-6ae5fa16f6f3-kube-api-access-sk2tg\") pod \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") "
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.812580 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-utilities\") pod \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") "
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.812807 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-catalog-content\") pod \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\" (UID: \"96794203-e0fa-4f22-84df-6ae5fa16f6f3\") "
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.813693 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-utilities" (OuterVolumeSpecName: "utilities") pod "96794203-e0fa-4f22-84df-6ae5fa16f6f3" (UID: "96794203-e0fa-4f22-84df-6ae5fa16f6f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.818399 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96794203-e0fa-4f22-84df-6ae5fa16f6f3-kube-api-access-sk2tg" (OuterVolumeSpecName: "kube-api-access-sk2tg") pod "96794203-e0fa-4f22-84df-6ae5fa16f6f3" (UID: "96794203-e0fa-4f22-84df-6ae5fa16f6f3"). InnerVolumeSpecName "kube-api-access-sk2tg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.830866 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk2tg\" (UniqueName: \"kubernetes.io/projected/96794203-e0fa-4f22-84df-6ae5fa16f6f3-kube-api-access-sk2tg\") on node \"crc\" DevicePath \"\""
Jan 26 17:28:38 crc kubenswrapper[4823]: I0126 17:28:38.830912 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.001708 4823 generic.go:334] "Generic (PLEG): container finished" podID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerID="8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1" exitCode=0
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.001841 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-sfw6r"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.001868 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerDied","Data":"8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1"}
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.003712 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-sfw6r" event={"ID":"96794203-e0fa-4f22-84df-6ae5fa16f6f3","Type":"ContainerDied","Data":"cdffbd541ed2912afb97f56e3a10cbd949ded92401283995e7718ea76135738f"}
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.003794 4823 scope.go:117] "RemoveContainer" containerID="8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.037683 4823 scope.go:117] "RemoveContainer" containerID="8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.060711 4823 scope.go:117] "RemoveContainer" containerID="4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.108183 4823 scope.go:117] "RemoveContainer" containerID="8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1"
Jan 26 17:28:39 crc kubenswrapper[4823]: E0126 17:28:39.108734 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1\": container with ID starting with 8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1 not found: ID does not exist" containerID="8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.108855 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1"} err="failed to get container status \"8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1\": rpc error: code = NotFound desc = could not find container \"8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1\": container with ID starting with 8b86fbcd62852718af3e6ff29446ef98e211f77609c0aab4ebdd83bea8e24fc1 not found: ID does not exist"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.108947 4823 scope.go:117] "RemoveContainer" containerID="8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a"
Jan 26 17:28:39 crc kubenswrapper[4823]: E0126 17:28:39.109274 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a\": container with ID starting with 8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a not found: ID does not exist" containerID="8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.109355 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a"} err="failed to get container status \"8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a\": rpc error: code = NotFound desc = could not find container \"8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a\": container with ID starting with 8c74a1e02e010e75c99c1c164926ed672b776db8db067bb62a3a739cc19eb07a not found: ID does not exist"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.109434 4823 scope.go:117] "RemoveContainer" containerID="4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec"
Jan 26 17:28:39 crc kubenswrapper[4823]: E0126 17:28:39.109678 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec\": container with ID starting with 4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec not found: ID does not exist" containerID="4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.109762 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec"} err="failed to get container status \"4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec\": rpc error: code = NotFound desc = could not find container \"4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec\": container with ID starting with 4ca62a1a5858cd56f6d803191f0339e7b2e3207b41ca1f595e260ed78f0ce3ec not found: ID does not exist"
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.959142 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96794203-e0fa-4f22-84df-6ae5fa16f6f3" (UID: "96794203-e0fa-4f22-84df-6ae5fa16f6f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:28:39 crc kubenswrapper[4823]: I0126 17:28:39.988671 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96794203-e0fa-4f22-84df-6ae5fa16f6f3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:28:40 crc kubenswrapper[4823]: I0126 17:28:40.241348 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-sfw6r"]
Jan 26 17:28:40 crc kubenswrapper[4823]: I0126 17:28:40.255028 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-sfw6r"]
Jan 26 17:28:41 crc kubenswrapper[4823]: I0126 17:28:41.577267 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" path="/var/lib/kubelet/pods/96794203-e0fa-4f22-84df-6ae5fa16f6f3/volumes"
Jan 26 17:29:04 crc kubenswrapper[4823]: I0126 17:29:04.508433 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:29:04 crc kubenswrapper[4823]: I0126 17:29:04.510275 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:29:34 crc kubenswrapper[4823]: I0126 17:29:34.508591 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:29:34 crc kubenswrapper[4823]: I0126 17:29:34.509123 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:29:34 crc kubenswrapper[4823]: I0126 17:29:34.509170 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2"
Jan 26 17:29:34 crc kubenswrapper[4823]: I0126 17:29:34.509904 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ee64b2317bf7c0af97daed171c60a01618577c5b2362fbb06dbc12f26a2e844"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:29:34 crc kubenswrapper[4823]: I0126 17:29:34.509965 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://3ee64b2317bf7c0af97daed171c60a01618577c5b2362fbb06dbc12f26a2e844" gracePeriod=600
Jan 26 17:29:35 crc kubenswrapper[4823]: I0126 17:29:35.518461 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="3ee64b2317bf7c0af97daed171c60a01618577c5b2362fbb06dbc12f26a2e844" exitCode=0
Jan 26 17:29:35 crc kubenswrapper[4823]: I0126 17:29:35.518568 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"3ee64b2317bf7c0af97daed171c60a01618577c5b2362fbb06dbc12f26a2e844"}
Jan 26 17:29:35 crc kubenswrapper[4823]: I0126 17:29:35.518778 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07"}
Jan 26 17:29:35 crc kubenswrapper[4823]: I0126 17:29:35.518812 4823 scope.go:117] "RemoveContainer" containerID="26c58e804d351c1d88447b659aedc1314e88fb1188b67c34a3f933e0cbded407"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.147916 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"]
Jan 26 17:30:00 crc kubenswrapper[4823]: E0126 17:30:00.148861 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="extract-utilities"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.148880 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="extract-utilities"
Jan 26 17:30:00 crc kubenswrapper[4823]: E0126 17:30:00.148907 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="registry-server"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.148914 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="registry-server"
Jan 26 17:30:00 crc kubenswrapper[4823]: E0126 17:30:00.148944 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="extract-content"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.148952 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="extract-content"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.149135 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="96794203-e0fa-4f22-84df-6ae5fa16f6f3" containerName="registry-server"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.149912 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.154428 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.157984 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.162566 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"]
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.315174 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4tkh\" (UniqueName: \"kubernetes.io/projected/1c3b5150-520b-46a4-91da-60a2a3bef04a-kube-api-access-s4tkh\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.315303 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c3b5150-520b-46a4-91da-60a2a3bef04a-config-volume\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.315341 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c3b5150-520b-46a4-91da-60a2a3bef04a-secret-volume\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.417775 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c3b5150-520b-46a4-91da-60a2a3bef04a-config-volume\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.418143 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c3b5150-520b-46a4-91da-60a2a3bef04a-secret-volume\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.418311 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4tkh\" (UniqueName: \"kubernetes.io/projected/1c3b5150-520b-46a4-91da-60a2a3bef04a-kube-api-access-s4tkh\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.419034 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c3b5150-520b-46a4-91da-60a2a3bef04a-config-volume\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.434810 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c3b5150-520b-46a4-91da-60a2a3bef04a-secret-volume\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.437334 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4tkh\" (UniqueName: \"kubernetes.io/projected/1c3b5150-520b-46a4-91da-60a2a3bef04a-kube-api-access-s4tkh\") pod \"collect-profiles-29490810-vd54s\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.471662 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:00 crc kubenswrapper[4823]: I0126 17:30:00.970341 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"]
Jan 26 17:30:01 crc kubenswrapper[4823]: I0126 17:30:01.768312 4823 generic.go:334] "Generic (PLEG): container finished" podID="1c3b5150-520b-46a4-91da-60a2a3bef04a" containerID="2261d70242e5a077db23e066e1cf2bf35aaef5dd89e302c7f6bf9eb561ed7b4a" exitCode=0
Jan 26 17:30:01 crc kubenswrapper[4823]: I0126 17:30:01.768593 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s" event={"ID":"1c3b5150-520b-46a4-91da-60a2a3bef04a","Type":"ContainerDied","Data":"2261d70242e5a077db23e066e1cf2bf35aaef5dd89e302c7f6bf9eb561ed7b4a"}
Jan 26 17:30:01 crc kubenswrapper[4823]: I0126 17:30:01.768626 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s" event={"ID":"1c3b5150-520b-46a4-91da-60a2a3bef04a","Type":"ContainerStarted","Data":"dc5fa081abfd5b1b7ea24943614194c4a57d8010cae5ee2aafa4b98b2ccc59d7"}
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.125685 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.192399 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c3b5150-520b-46a4-91da-60a2a3bef04a-secret-volume\") pod \"1c3b5150-520b-46a4-91da-60a2a3bef04a\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") "
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.192507 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4tkh\" (UniqueName: \"kubernetes.io/projected/1c3b5150-520b-46a4-91da-60a2a3bef04a-kube-api-access-s4tkh\") pod \"1c3b5150-520b-46a4-91da-60a2a3bef04a\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") "
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.192567 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c3b5150-520b-46a4-91da-60a2a3bef04a-config-volume\") pod \"1c3b5150-520b-46a4-91da-60a2a3bef04a\" (UID: \"1c3b5150-520b-46a4-91da-60a2a3bef04a\") "
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.193275 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c3b5150-520b-46a4-91da-60a2a3bef04a-config-volume" (OuterVolumeSpecName: "config-volume") pod "1c3b5150-520b-46a4-91da-60a2a3bef04a" (UID: "1c3b5150-520b-46a4-91da-60a2a3bef04a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.194790 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1c3b5150-520b-46a4-91da-60a2a3bef04a-config-volume\") on node \"crc\" DevicePath \"\""
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.202604 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3b5150-520b-46a4-91da-60a2a3bef04a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1c3b5150-520b-46a4-91da-60a2a3bef04a" (UID: "1c3b5150-520b-46a4-91da-60a2a3bef04a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.202701 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3b5150-520b-46a4-91da-60a2a3bef04a-kube-api-access-s4tkh" (OuterVolumeSpecName: "kube-api-access-s4tkh") pod "1c3b5150-520b-46a4-91da-60a2a3bef04a" (UID: "1c3b5150-520b-46a4-91da-60a2a3bef04a"). InnerVolumeSpecName "kube-api-access-s4tkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.296864 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1c3b5150-520b-46a4-91da-60a2a3bef04a-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.296919 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4tkh\" (UniqueName: \"kubernetes.io/projected/1c3b5150-520b-46a4-91da-60a2a3bef04a-kube-api-access-s4tkh\") on node \"crc\" DevicePath \"\""
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.789978 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s" event={"ID":"1c3b5150-520b-46a4-91da-60a2a3bef04a","Type":"ContainerDied","Data":"dc5fa081abfd5b1b7ea24943614194c4a57d8010cae5ee2aafa4b98b2ccc59d7"}
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.790022 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc5fa081abfd5b1b7ea24943614194c4a57d8010cae5ee2aafa4b98b2ccc59d7"
Jan 26 17:30:03 crc kubenswrapper[4823]: I0126 17:30:03.790061 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490810-vd54s"
Jan 26 17:30:04 crc kubenswrapper[4823]: I0126 17:30:04.197091 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb"]
Jan 26 17:30:04 crc kubenswrapper[4823]: I0126 17:30:04.207897 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490765-rxhzb"]
Jan 26 17:30:05 crc kubenswrapper[4823]: I0126 17:30:05.570605 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c32be40a-1fd9-47a5-97e9-dcbec990f96f" path="/var/lib/kubelet/pods/c32be40a-1fd9-47a5-97e9-dcbec990f96f/volumes"
Jan 26 17:30:10 crc kubenswrapper[4823]: I0126 17:30:10.513882 4823 scope.go:117] "RemoveContainer" containerID="662e2a237c0ee959abd0b34c1769febd6b23272096fa1814515671461f3ddbe4"
Jan 26 17:31:34 crc kubenswrapper[4823]: I0126 17:31:34.508783 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:31:34 crc kubenswrapper[4823]: I0126 17:31:34.509292 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.463651 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-77msq"]
Jan 26 17:31:57 crc kubenswrapper[4823]: E0126 17:31:57.464674 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3b5150-520b-46a4-91da-60a2a3bef04a" containerName="collect-profiles"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.464691 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3b5150-520b-46a4-91da-60a2a3bef04a" containerName="collect-profiles"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.464922 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3b5150-520b-46a4-91da-60a2a3bef04a" containerName="collect-profiles"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.466673 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.473003 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77msq"]
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.591683 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lfx\" (UniqueName: \"kubernetes.io/projected/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-kube-api-access-k7lfx\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.591742 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-utilities\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.591767 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-catalog-content\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.694232 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7lfx\" (UniqueName: \"kubernetes.io/projected/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-kube-api-access-k7lfx\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.694298 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-utilities\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.694326 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-catalog-content\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.694822 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-catalog-content\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.694852 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-utilities\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.716616 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7lfx\" (UniqueName: \"kubernetes.io/projected/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-kube-api-access-k7lfx\") pod \"community-operators-77msq\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:57 crc kubenswrapper[4823]: I0126 17:31:57.794596 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:31:58 crc kubenswrapper[4823]: I0126 17:31:58.322711 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-77msq"]
Jan 26 17:31:58 crc kubenswrapper[4823]: I0126 17:31:58.821760 4823 generic.go:334] "Generic (PLEG): container finished" podID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerID="58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4" exitCode=0
Jan 26 17:31:58 crc kubenswrapper[4823]: I0126 17:31:58.821947 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77msq" event={"ID":"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab","Type":"ContainerDied","Data":"58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4"}
Jan 26 17:31:58 crc kubenswrapper[4823]: I0126 17:31:58.822065 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77msq" event={"ID":"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab","Type":"ContainerStarted","Data":"ada250e4dacf4d4e53713bd43286860b61bdd97494750447352356ca3f0022d7"}
Jan 26 17:32:00 crc kubenswrapper[4823]: I0126 17:32:00.842605 4823 generic.go:334] "Generic (PLEG): container finished" podID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerID="2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732" exitCode=0
Jan 26 17:32:00 crc kubenswrapper[4823]: I0126 17:32:00.842695 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77msq" event={"ID":"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab","Type":"ContainerDied","Data":"2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732"}
Jan 26 17:32:01 crc kubenswrapper[4823]: I0126 17:32:01.852041 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77msq" event={"ID":"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab","Type":"ContainerStarted","Data":"39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93"}
Jan 26 17:32:01 crc kubenswrapper[4823]: I0126 17:32:01.877253 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-77msq" podStartSLOduration=2.332907899 podStartE2EDuration="4.877229691s" podCreationTimestamp="2026-01-26 17:31:57 +0000 UTC" firstStartedPulling="2026-01-26 17:31:58.826343703 +0000 UTC m=+9915.511806808" lastFinishedPulling="2026-01-26 17:32:01.370665495 +0000 UTC m=+9918.056128600" observedRunningTime="2026-01-26 17:32:01.869086249 +0000 UTC m=+9918.554549354" watchObservedRunningTime="2026-01-26 17:32:01.877229691 +0000 UTC m=+9918.562692796"
Jan 26 17:32:04 crc kubenswrapper[4823]: I0126 17:32:04.508240 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:32:04 crc kubenswrapper[4823]: I0126 17:32:04.508858 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:32:07 crc kubenswrapper[4823]: I0126 17:32:07.794833 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:32:07 crc kubenswrapper[4823]: I0126 17:32:07.795199 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:32:07 crc kubenswrapper[4823]: I0126 17:32:07.837365 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:32:07 crc kubenswrapper[4823]: I0126 17:32:07.954575 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-77msq"
Jan 26 17:32:08 crc kubenswrapper[4823]: I0126 17:32:08.074281 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77msq"]
Jan 26 17:32:09 crc kubenswrapper[4823]: I0126 17:32:09.925532 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-77msq" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="registry-server" containerID="cri-o://39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93" gracePeriod=2
Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.402941 4823 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-77msq" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.505700 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-utilities\") pod \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.505909 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7lfx\" (UniqueName: \"kubernetes.io/projected/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-kube-api-access-k7lfx\") pod \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.505952 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-catalog-content\") pod \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\" (UID: \"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab\") " Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.507013 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-utilities" (OuterVolumeSpecName: "utilities") pod "4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" (UID: "4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.511489 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-kube-api-access-k7lfx" (OuterVolumeSpecName: "kube-api-access-k7lfx") pod "4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" (UID: "4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab"). InnerVolumeSpecName "kube-api-access-k7lfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.553093 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" (UID: "4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.608899 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.608937 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7lfx\" (UniqueName: \"kubernetes.io/projected/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-kube-api-access-k7lfx\") on node \"crc\" DevicePath \"\"" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.608957 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.936270 4823 generic.go:334] "Generic (PLEG): container finished" podID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerID="39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93" exitCode=0 Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.936351 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-77msq" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.936378 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77msq" event={"ID":"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab","Type":"ContainerDied","Data":"39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93"} Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.937435 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-77msq" event={"ID":"4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab","Type":"ContainerDied","Data":"ada250e4dacf4d4e53713bd43286860b61bdd97494750447352356ca3f0022d7"} Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.937476 4823 scope.go:117] "RemoveContainer" containerID="39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.961571 4823 scope.go:117] "RemoveContainer" containerID="2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732" Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.977125 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-77msq"] Jan 26 17:32:10 crc kubenswrapper[4823]: I0126 17:32:10.988129 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-77msq"] Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.000987 4823 scope.go:117] "RemoveContainer" containerID="58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.029174 4823 scope.go:117] "RemoveContainer" containerID="39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93" Jan 26 17:32:11 crc kubenswrapper[4823]: E0126 17:32:11.029762 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93\": container with ID starting with 39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93 not found: ID does not exist" containerID="39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.029804 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93"} err="failed to get container status \"39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93\": rpc error: code = NotFound desc = could not find container \"39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93\": container with ID starting with 39d0c43ac64e1dfed8d8887025f788d1d56564c9da383faaa65cbcada9462c93 not found: ID does not exist" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.029832 4823 scope.go:117] "RemoveContainer" containerID="2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732" Jan 26 17:32:11 crc kubenswrapper[4823]: E0126 17:32:11.030216 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732\": container with ID starting with 2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732 not found: ID does not exist" containerID="2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.030253 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732"} err="failed to get container status \"2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732\": rpc error: code = NotFound desc = could not find container \"2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732\": container with ID 
starting with 2bed0c4d71eab86cceb83a29075dd7dca2d7bb7ceac9d58c0db0aabc590a6732 not found: ID does not exist" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.030276 4823 scope.go:117] "RemoveContainer" containerID="58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4" Jan 26 17:32:11 crc kubenswrapper[4823]: E0126 17:32:11.030733 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4\": container with ID starting with 58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4 not found: ID does not exist" containerID="58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.030843 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4"} err="failed to get container status \"58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4\": rpc error: code = NotFound desc = could not find container \"58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4\": container with ID starting with 58609c7b9612dad41e9206c7bce1103525b3e9929be23e9dddb8a4136fd892c4 not found: ID does not exist" Jan 26 17:32:11 crc kubenswrapper[4823]: I0126 17:32:11.573582 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" path="/var/lib/kubelet/pods/4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab/volumes" Jan 26 17:32:34 crc kubenswrapper[4823]: I0126 17:32:34.508806 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:32:34 crc kubenswrapper[4823]: I0126 
17:32:34.509483 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:32:34 crc kubenswrapper[4823]: I0126 17:32:34.509549 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:32:34 crc kubenswrapper[4823]: I0126 17:32:34.510508 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:32:34 crc kubenswrapper[4823]: I0126 17:32:34.510587 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" gracePeriod=600 Jan 26 17:32:34 crc kubenswrapper[4823]: E0126 17:32:34.669596 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:32:35 crc kubenswrapper[4823]: I0126 17:32:35.168588 4823 generic.go:334] "Generic (PLEG): container finished" 
podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" exitCode=0 Jan 26 17:32:35 crc kubenswrapper[4823]: I0126 17:32:35.168595 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07"} Jan 26 17:32:35 crc kubenswrapper[4823]: I0126 17:32:35.168663 4823 scope.go:117] "RemoveContainer" containerID="3ee64b2317bf7c0af97daed171c60a01618577c5b2362fbb06dbc12f26a2e844" Jan 26 17:32:35 crc kubenswrapper[4823]: I0126 17:32:35.169462 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:32:35 crc kubenswrapper[4823]: E0126 17:32:35.169810 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:32:46 crc kubenswrapper[4823]: I0126 17:32:46.560997 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:32:46 crc kubenswrapper[4823]: E0126 17:32:46.561997 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 
17:33:00 crc kubenswrapper[4823]: I0126 17:33:00.561269 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:33:00 crc kubenswrapper[4823]: E0126 17:33:00.562099 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:33:14 crc kubenswrapper[4823]: I0126 17:33:14.560893 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:33:14 crc kubenswrapper[4823]: E0126 17:33:14.561688 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:33:25 crc kubenswrapper[4823]: I0126 17:33:25.560579 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:33:25 crc kubenswrapper[4823]: E0126 17:33:25.561738 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.382441 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4km94"] Jan 26 17:33:31 crc kubenswrapper[4823]: E0126 17:33:31.384010 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="registry-server" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.384025 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="registry-server" Jan 26 17:33:31 crc kubenswrapper[4823]: E0126 17:33:31.384040 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="extract-content" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.384046 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="extract-content" Jan 26 17:33:31 crc kubenswrapper[4823]: E0126 17:33:31.384058 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="extract-utilities" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.384064 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="extract-utilities" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.384261 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dbab5ec-dfaa-4f3a-8d9c-ddb3e3da7bab" containerName="registry-server" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.385844 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.400055 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4km94"] Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.504566 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-catalog-content\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.504665 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-utilities\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.504691 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtkjx\" (UniqueName: \"kubernetes.io/projected/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-kube-api-access-jtkjx\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.607095 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-utilities\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.607432 4823 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-jtkjx\" (UniqueName: \"kubernetes.io/projected/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-kube-api-access-jtkjx\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.607673 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-utilities\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.608901 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-catalog-content\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.609333 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-catalog-content\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.633349 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtkjx\" (UniqueName: \"kubernetes.io/projected/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-kube-api-access-jtkjx\") pod \"redhat-marketplace-4km94\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:31 crc kubenswrapper[4823]: I0126 17:33:31.711824 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:32 crc kubenswrapper[4823]: I0126 17:33:32.189889 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4km94"] Jan 26 17:33:32 crc kubenswrapper[4823]: I0126 17:33:32.668677 4823 generic.go:334] "Generic (PLEG): container finished" podID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerID="f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582" exitCode=0 Jan 26 17:33:32 crc kubenswrapper[4823]: I0126 17:33:32.669413 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerDied","Data":"f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582"} Jan 26 17:33:32 crc kubenswrapper[4823]: I0126 17:33:32.670170 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerStarted","Data":"849251b1e95a0217898c83ea2cb6e8061c01eb861789e397cdbaa1f726ee73bf"} Jan 26 17:33:32 crc kubenswrapper[4823]: I0126 17:33:32.670687 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:33:33 crc kubenswrapper[4823]: I0126 17:33:33.681338 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerStarted","Data":"430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23"} Jan 26 17:33:34 crc kubenswrapper[4823]: I0126 17:33:34.691799 4823 generic.go:334] "Generic (PLEG): container finished" podID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerID="430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23" exitCode=0 Jan 26 17:33:34 crc kubenswrapper[4823]: I0126 17:33:34.691871 4823 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerDied","Data":"430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23"} Jan 26 17:33:35 crc kubenswrapper[4823]: I0126 17:33:35.707422 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerStarted","Data":"c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9"} Jan 26 17:33:35 crc kubenswrapper[4823]: I0126 17:33:35.734232 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4km94" podStartSLOduration=2.197823974 podStartE2EDuration="4.73421562s" podCreationTimestamp="2026-01-26 17:33:31 +0000 UTC" firstStartedPulling="2026-01-26 17:33:32.6704114 +0000 UTC m=+10009.355874505" lastFinishedPulling="2026-01-26 17:33:35.206803046 +0000 UTC m=+10011.892266151" observedRunningTime="2026-01-26 17:33:35.724502855 +0000 UTC m=+10012.409965970" watchObservedRunningTime="2026-01-26 17:33:35.73421562 +0000 UTC m=+10012.419678725" Jan 26 17:33:38 crc kubenswrapper[4823]: I0126 17:33:38.561521 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:33:38 crc kubenswrapper[4823]: E0126 17:33:38.562319 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:33:41 crc kubenswrapper[4823]: I0126 17:33:41.712643 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:41 crc kubenswrapper[4823]: I0126 17:33:41.713022 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:41 crc kubenswrapper[4823]: I0126 17:33:41.770094 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:41 crc kubenswrapper[4823]: I0126 17:33:41.828625 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:42 crc kubenswrapper[4823]: I0126 17:33:42.011546 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4km94"] Jan 26 17:33:43 crc kubenswrapper[4823]: I0126 17:33:43.797559 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4km94" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="registry-server" containerID="cri-o://c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9" gracePeriod=2 Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.220473 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.280431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-catalog-content\") pod \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.280522 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtkjx\" (UniqueName: \"kubernetes.io/projected/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-kube-api-access-jtkjx\") pod \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.280616 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-utilities\") pod \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\" (UID: \"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5\") " Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.281792 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-utilities" (OuterVolumeSpecName: "utilities") pod "bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" (UID: "bb6e10a7-8981-4edc-b289-e8a02bc0dbe5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.286919 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-kube-api-access-jtkjx" (OuterVolumeSpecName: "kube-api-access-jtkjx") pod "bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" (UID: "bb6e10a7-8981-4edc-b289-e8a02bc0dbe5"). InnerVolumeSpecName "kube-api-access-jtkjx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.303389 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" (UID: "bb6e10a7-8981-4edc-b289-e8a02bc0dbe5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.382931 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.382966 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtkjx\" (UniqueName: \"kubernetes.io/projected/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-kube-api-access-jtkjx\") on node \"crc\" DevicePath \"\"" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.382981 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.807463 4823 generic.go:334] "Generic (PLEG): container finished" podID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerID="c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9" exitCode=0 Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.807505 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerDied","Data":"c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9"} Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.807534 4823 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4km94" event={"ID":"bb6e10a7-8981-4edc-b289-e8a02bc0dbe5","Type":"ContainerDied","Data":"849251b1e95a0217898c83ea2cb6e8061c01eb861789e397cdbaa1f726ee73bf"} Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.807550 4823 scope.go:117] "RemoveContainer" containerID="c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.807559 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4km94" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.828095 4823 scope.go:117] "RemoveContainer" containerID="430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.846414 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4km94"] Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.856466 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4km94"] Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.876160 4823 scope.go:117] "RemoveContainer" containerID="f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.909816 4823 scope.go:117] "RemoveContainer" containerID="c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9" Jan 26 17:33:44 crc kubenswrapper[4823]: E0126 17:33:44.910344 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9\": container with ID starting with c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9 not found: ID does not exist" containerID="c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.910402 4823 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9"} err="failed to get container status \"c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9\": rpc error: code = NotFound desc = could not find container \"c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9\": container with ID starting with c84de6a5d407e5874efb8ed27b6c2ece95ecc992adb99a8f58d520ea642188f9 not found: ID does not exist" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.910433 4823 scope.go:117] "RemoveContainer" containerID="430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23" Jan 26 17:33:44 crc kubenswrapper[4823]: E0126 17:33:44.910710 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23\": container with ID starting with 430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23 not found: ID does not exist" containerID="430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.910739 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23"} err="failed to get container status \"430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23\": rpc error: code = NotFound desc = could not find container \"430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23\": container with ID starting with 430a7fd73f320e68992b34d7e5bcab43edac0a36fec615fac7de04c4eece5a23 not found: ID does not exist" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.910757 4823 scope.go:117] "RemoveContainer" containerID="f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582" Jan 26 17:33:44 crc kubenswrapper[4823]: E0126 
17:33:44.911002 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582\": container with ID starting with f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582 not found: ID does not exist" containerID="f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582" Jan 26 17:33:44 crc kubenswrapper[4823]: I0126 17:33:44.911035 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582"} err="failed to get container status \"f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582\": rpc error: code = NotFound desc = could not find container \"f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582\": container with ID starting with f53b072fa5a97defb223fce263e0355b418a972212ec6d70244607cc98a91582 not found: ID does not exist" Jan 26 17:33:45 crc kubenswrapper[4823]: I0126 17:33:45.572239 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" path="/var/lib/kubelet/pods/bb6e10a7-8981-4edc-b289-e8a02bc0dbe5/volumes" Jan 26 17:33:53 crc kubenswrapper[4823]: I0126 17:33:53.567157 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:33:53 crc kubenswrapper[4823]: E0126 17:33:53.569213 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:34:04 crc kubenswrapper[4823]: I0126 17:34:04.561351 
4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:34:04 crc kubenswrapper[4823]: E0126 17:34:04.562291 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:34:17 crc kubenswrapper[4823]: I0126 17:34:17.561652 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:34:17 crc kubenswrapper[4823]: E0126 17:34:17.563190 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:34:32 crc kubenswrapper[4823]: I0126 17:34:32.560942 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:34:32 crc kubenswrapper[4823]: E0126 17:34:32.562128 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:34:44 crc kubenswrapper[4823]: I0126 
17:34:44.561060 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:34:44 crc kubenswrapper[4823]: E0126 17:34:44.561942 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:34:57 crc kubenswrapper[4823]: I0126 17:34:57.561106 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:34:57 crc kubenswrapper[4823]: E0126 17:34:57.561854 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:35:12 crc kubenswrapper[4823]: I0126 17:35:12.561056 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:35:12 crc kubenswrapper[4823]: E0126 17:35:12.562957 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:35:26 crc 
kubenswrapper[4823]: I0126 17:35:26.560621 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:35:26 crc kubenswrapper[4823]: E0126 17:35:26.561405 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:35:39 crc kubenswrapper[4823]: I0126 17:35:39.560425 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:35:39 crc kubenswrapper[4823]: E0126 17:35:39.562222 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:35:52 crc kubenswrapper[4823]: I0126 17:35:52.560516 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:35:52 crc kubenswrapper[4823]: E0126 17:35:52.561429 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 
26 17:36:07 crc kubenswrapper[4823]: I0126 17:36:07.560971 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:36:07 crc kubenswrapper[4823]: E0126 17:36:07.561653 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:36:19 crc kubenswrapper[4823]: I0126 17:36:19.560679 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:36:19 crc kubenswrapper[4823]: E0126 17:36:19.561663 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:36:30 crc kubenswrapper[4823]: I0126 17:36:30.560525 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:36:30 crc kubenswrapper[4823]: E0126 17:36:30.561665 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:36:43 crc kubenswrapper[4823]: I0126 17:36:43.566217 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:36:43 crc kubenswrapper[4823]: E0126 17:36:43.568545 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.853661 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w77nl"] Jan 26 17:36:47 crc kubenswrapper[4823]: E0126 17:36:47.855181 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="extract-content" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.855224 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="extract-content" Jan 26 17:36:47 crc kubenswrapper[4823]: E0126 17:36:47.855254 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="extract-utilities" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.855264 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="extract-utilities" Jan 26 17:36:47 crc kubenswrapper[4823]: E0126 17:36:47.855290 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="registry-server" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.855298 4823 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="registry-server" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.855559 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb6e10a7-8981-4edc-b289-e8a02bc0dbe5" containerName="registry-server" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.857408 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:47 crc kubenswrapper[4823]: I0126 17:36:47.864351 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w77nl"] Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.005348 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-utilities\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.005694 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vthpz\" (UniqueName: \"kubernetes.io/projected/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-kube-api-access-vthpz\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.005852 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-catalog-content\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.107953 4823 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-utilities\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.108020 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vthpz\" (UniqueName: \"kubernetes.io/projected/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-kube-api-access-vthpz\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.108077 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-catalog-content\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.108480 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-utilities\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.108623 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-catalog-content\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.130628 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vthpz\" (UniqueName: \"kubernetes.io/projected/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-kube-api-access-vthpz\") pod \"certified-operators-w77nl\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.175193 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:48 crc kubenswrapper[4823]: I0126 17:36:48.720513 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w77nl"] Jan 26 17:36:49 crc kubenswrapper[4823]: I0126 17:36:49.477301 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerStarted","Data":"fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4"} Jan 26 17:36:49 crc kubenswrapper[4823]: I0126 17:36:49.477634 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerStarted","Data":"e125051ae7aed39bf896a53e35265074a7371369a2358cffe69d41a139a44b34"} Jan 26 17:36:50 crc kubenswrapper[4823]: I0126 17:36:50.489613 4823 generic.go:334] "Generic (PLEG): container finished" podID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerID="fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4" exitCode=0 Jan 26 17:36:50 crc kubenswrapper[4823]: I0126 17:36:50.489666 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerDied","Data":"fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4"} Jan 26 17:36:52 crc kubenswrapper[4823]: I0126 17:36:52.511612 4823 generic.go:334] "Generic (PLEG): 
container finished" podID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerID="fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0" exitCode=0 Jan 26 17:36:52 crc kubenswrapper[4823]: I0126 17:36:52.511684 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerDied","Data":"fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0"} Jan 26 17:36:54 crc kubenswrapper[4823]: I0126 17:36:54.536071 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerStarted","Data":"c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9"} Jan 26 17:36:54 crc kubenswrapper[4823]: I0126 17:36:54.564389 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w77nl" podStartSLOduration=4.68010689 podStartE2EDuration="7.564344245s" podCreationTimestamp="2026-01-26 17:36:47 +0000 UTC" firstStartedPulling="2026-01-26 17:36:50.492220726 +0000 UTC m=+10207.177683851" lastFinishedPulling="2026-01-26 17:36:53.376458101 +0000 UTC m=+10210.061921206" observedRunningTime="2026-01-26 17:36:54.554390975 +0000 UTC m=+10211.239854100" watchObservedRunningTime="2026-01-26 17:36:54.564344245 +0000 UTC m=+10211.249807360" Jan 26 17:36:56 crc kubenswrapper[4823]: I0126 17:36:56.560260 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:36:56 crc kubenswrapper[4823]: E0126 17:36:56.560710 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:36:58 crc kubenswrapper[4823]: I0126 17:36:58.175532 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:58 crc kubenswrapper[4823]: I0126 17:36:58.176089 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:58 crc kubenswrapper[4823]: I0126 17:36:58.222056 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:58 crc kubenswrapper[4823]: I0126 17:36:58.610559 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:36:58 crc kubenswrapper[4823]: I0126 17:36:58.675225 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w77nl"] Jan 26 17:37:00 crc kubenswrapper[4823]: I0126 17:37:00.592105 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w77nl" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="registry-server" containerID="cri-o://c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9" gracePeriod=2 Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.149135 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.168313 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vthpz\" (UniqueName: \"kubernetes.io/projected/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-kube-api-access-vthpz\") pod \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.169954 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-utilities" (OuterVolumeSpecName: "utilities") pod "70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" (UID: "70a8fe04-e9b6-41cb-8f82-3c0fdc28f284"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.173514 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-utilities\") pod \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.173684 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-catalog-content\") pod \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\" (UID: \"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284\") " Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.174653 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.176896 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-kube-api-access-vthpz" (OuterVolumeSpecName: "kube-api-access-vthpz") pod "70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" (UID: "70a8fe04-e9b6-41cb-8f82-3c0fdc28f284"). InnerVolumeSpecName "kube-api-access-vthpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.246586 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" (UID: "70a8fe04-e9b6-41cb-8f82-3c0fdc28f284"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.276148 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vthpz\" (UniqueName: \"kubernetes.io/projected/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-kube-api-access-vthpz\") on node \"crc\" DevicePath \"\"" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.276188 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.601751 4823 generic.go:334] "Generic (PLEG): container finished" podID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerID="c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9" exitCode=0 Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.601796 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w77nl" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.601811 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerDied","Data":"c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9"} Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.602219 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w77nl" event={"ID":"70a8fe04-e9b6-41cb-8f82-3c0fdc28f284","Type":"ContainerDied","Data":"e125051ae7aed39bf896a53e35265074a7371369a2358cffe69d41a139a44b34"} Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.602238 4823 scope.go:117] "RemoveContainer" containerID="c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.624993 4823 scope.go:117] "RemoveContainer" containerID="fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.638823 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w77nl"] Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.649284 4823 scope.go:117] "RemoveContainer" containerID="fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4" Jan 26 17:37:01 crc kubenswrapper[4823]: I0126 17:37:01.656613 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w77nl"] Jan 26 17:37:02 crc kubenswrapper[4823]: I0126 17:37:02.291041 4823 scope.go:117] "RemoveContainer" containerID="c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9" Jan 26 17:37:02 crc kubenswrapper[4823]: E0126 17:37:02.291517 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9\": container with ID starting with c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9 not found: ID does not exist" containerID="c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9" Jan 26 17:37:02 crc kubenswrapper[4823]: I0126 17:37:02.291569 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9"} err="failed to get container status \"c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9\": rpc error: code = NotFound desc = could not find container \"c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9\": container with ID starting with c0cf7759dd744d90cc46f6ae6288c9b77e7c3ad28dd923d29b5c1fc1070b24e9 not found: ID does not exist" Jan 26 17:37:02 crc kubenswrapper[4823]: I0126 17:37:02.291597 4823 scope.go:117] "RemoveContainer" containerID="fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0" Jan 26 17:37:02 crc kubenswrapper[4823]: E0126 17:37:02.291942 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0\": container with ID starting with fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0 not found: ID does not exist" containerID="fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0" Jan 26 17:37:02 crc kubenswrapper[4823]: I0126 17:37:02.292000 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0"} err="failed to get container status \"fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0\": rpc error: code = NotFound desc = could not find container \"fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0\": container with ID 
starting with fdf59de28077ef771d253d9cc796ac693292cf3682175fc38a69211e0c35bcc0 not found: ID does not exist" Jan 26 17:37:02 crc kubenswrapper[4823]: I0126 17:37:02.292031 4823 scope.go:117] "RemoveContainer" containerID="fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4" Jan 26 17:37:02 crc kubenswrapper[4823]: E0126 17:37:02.292372 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4\": container with ID starting with fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4 not found: ID does not exist" containerID="fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4" Jan 26 17:37:02 crc kubenswrapper[4823]: I0126 17:37:02.292403 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4"} err="failed to get container status \"fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4\": rpc error: code = NotFound desc = could not find container \"fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4\": container with ID starting with fb10a3bc02d8ad0ecd02b279ed6497c0fc754cc7341d1c22f8c12d112c789af4 not found: ID does not exist" Jan 26 17:37:03 crc kubenswrapper[4823]: I0126 17:37:03.571647 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" path="/var/lib/kubelet/pods/70a8fe04-e9b6-41cb-8f82-3c0fdc28f284/volumes" Jan 26 17:37:07 crc kubenswrapper[4823]: I0126 17:37:07.560848 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:37:07 crc kubenswrapper[4823]: E0126 17:37:07.561657 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:37:20 crc kubenswrapper[4823]: I0126 17:37:20.561760 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:37:20 crc kubenswrapper[4823]: E0126 17:37:20.563227 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:37:32 crc kubenswrapper[4823]: I0126 17:37:32.560753 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:37:32 crc kubenswrapper[4823]: E0126 17:37:32.561747 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:37:46 crc kubenswrapper[4823]: I0126 17:37:46.561932 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:37:47 crc kubenswrapper[4823]: I0126 17:37:47.020794 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"cd47b5961f1639f406deee6ff4351184e482b9fc6d4c8ae8785fbba7b03021ea"} Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.829643 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7lgdb"] Jan 26 17:38:53 crc kubenswrapper[4823]: E0126 17:38:53.830681 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="extract-utilities" Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.830698 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="extract-utilities" Jan 26 17:38:53 crc kubenswrapper[4823]: E0126 17:38:53.830725 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="registry-server" Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.830734 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="registry-server" Jan 26 17:38:53 crc kubenswrapper[4823]: E0126 17:38:53.830751 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="extract-content" Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.830760 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="extract-content" Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.830975 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a8fe04-e9b6-41cb-8f82-3c0fdc28f284" containerName="registry-server" Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.833044 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:53 crc kubenswrapper[4823]: I0126 17:38:53.845692 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lgdb"] Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.014626 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grtgc\" (UniqueName: \"kubernetes.io/projected/afd51e01-d6ed-4395-9aa2-211976da81a0-kube-api-access-grtgc\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.015457 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-utilities\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.015674 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-catalog-content\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.118133 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grtgc\" (UniqueName: \"kubernetes.io/projected/afd51e01-d6ed-4395-9aa2-211976da81a0-kube-api-access-grtgc\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.118248 4823 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-utilities\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.118313 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-catalog-content\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.118826 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-utilities\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.118905 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-catalog-content\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.143069 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grtgc\" (UniqueName: \"kubernetes.io/projected/afd51e01-d6ed-4395-9aa2-211976da81a0-kube-api-access-grtgc\") pod \"redhat-operators-7lgdb\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.164236 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:38:54 crc kubenswrapper[4823]: I0126 17:38:54.712980 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7lgdb"] Jan 26 17:38:55 crc kubenswrapper[4823]: I0126 17:38:55.674264 4823 generic.go:334] "Generic (PLEG): container finished" podID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerID="7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20" exitCode=0 Jan 26 17:38:55 crc kubenswrapper[4823]: I0126 17:38:55.674361 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgdb" event={"ID":"afd51e01-d6ed-4395-9aa2-211976da81a0","Type":"ContainerDied","Data":"7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20"} Jan 26 17:38:55 crc kubenswrapper[4823]: I0126 17:38:55.674637 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgdb" event={"ID":"afd51e01-d6ed-4395-9aa2-211976da81a0","Type":"ContainerStarted","Data":"d4f4422eccedc3b665b417768a11d829e1c0aef6a2127b485a81b43a5b576373"} Jan 26 17:38:55 crc kubenswrapper[4823]: I0126 17:38:55.676546 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:38:57 crc kubenswrapper[4823]: I0126 17:38:57.705998 4823 generic.go:334] "Generic (PLEG): container finished" podID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerID="913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d" exitCode=0 Jan 26 17:38:57 crc kubenswrapper[4823]: I0126 17:38:57.706068 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgdb" event={"ID":"afd51e01-d6ed-4395-9aa2-211976da81a0","Type":"ContainerDied","Data":"913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d"} Jan 26 17:38:58 crc kubenswrapper[4823]: I0126 17:38:58.735160 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-7lgdb" event={"ID":"afd51e01-d6ed-4395-9aa2-211976da81a0","Type":"ContainerStarted","Data":"40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b"} Jan 26 17:38:58 crc kubenswrapper[4823]: I0126 17:38:58.769684 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7lgdb" podStartSLOduration=3.358754576 podStartE2EDuration="5.769651533s" podCreationTimestamp="2026-01-26 17:38:53 +0000 UTC" firstStartedPulling="2026-01-26 17:38:55.676194334 +0000 UTC m=+10332.361657439" lastFinishedPulling="2026-01-26 17:38:58.087091291 +0000 UTC m=+10334.772554396" observedRunningTime="2026-01-26 17:38:58.754972573 +0000 UTC m=+10335.440435708" watchObservedRunningTime="2026-01-26 17:38:58.769651533 +0000 UTC m=+10335.455114678" Jan 26 17:39:04 crc kubenswrapper[4823]: I0126 17:39:04.165650 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:39:04 crc kubenswrapper[4823]: I0126 17:39:04.166236 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:39:04 crc kubenswrapper[4823]: I0126 17:39:04.211853 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:39:04 crc kubenswrapper[4823]: I0126 17:39:04.829331 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:39:04 crc kubenswrapper[4823]: I0126 17:39:04.879159 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lgdb"] Jan 26 17:39:06 crc kubenswrapper[4823]: I0126 17:39:06.799588 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7lgdb" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" 
containerName="registry-server" containerID="cri-o://40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b" gracePeriod=2 Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.631724 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.811032 4823 generic.go:334] "Generic (PLEG): container finished" podID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerID="40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b" exitCode=0 Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.811077 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgdb" event={"ID":"afd51e01-d6ed-4395-9aa2-211976da81a0","Type":"ContainerDied","Data":"40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b"} Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.811131 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7lgdb" event={"ID":"afd51e01-d6ed-4395-9aa2-211976da81a0","Type":"ContainerDied","Data":"d4f4422eccedc3b665b417768a11d829e1c0aef6a2127b485a81b43a5b576373"} Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.811094 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7lgdb" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.811151 4823 scope.go:117] "RemoveContainer" containerID="40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.812052 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-catalog-content\") pod \"afd51e01-d6ed-4395-9aa2-211976da81a0\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.812096 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grtgc\" (UniqueName: \"kubernetes.io/projected/afd51e01-d6ed-4395-9aa2-211976da81a0-kube-api-access-grtgc\") pod \"afd51e01-d6ed-4395-9aa2-211976da81a0\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.812163 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-utilities\") pod \"afd51e01-d6ed-4395-9aa2-211976da81a0\" (UID: \"afd51e01-d6ed-4395-9aa2-211976da81a0\") " Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.813553 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-utilities" (OuterVolumeSpecName: "utilities") pod "afd51e01-d6ed-4395-9aa2-211976da81a0" (UID: "afd51e01-d6ed-4395-9aa2-211976da81a0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.823648 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd51e01-d6ed-4395-9aa2-211976da81a0-kube-api-access-grtgc" (OuterVolumeSpecName: "kube-api-access-grtgc") pod "afd51e01-d6ed-4395-9aa2-211976da81a0" (UID: "afd51e01-d6ed-4395-9aa2-211976da81a0"). InnerVolumeSpecName "kube-api-access-grtgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.888743 4823 scope.go:117] "RemoveContainer" containerID="913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.907523 4823 scope.go:117] "RemoveContainer" containerID="7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.914723 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grtgc\" (UniqueName: \"kubernetes.io/projected/afd51e01-d6ed-4395-9aa2-211976da81a0-kube-api-access-grtgc\") on node \"crc\" DevicePath \"\"" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.914770 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.958707 4823 scope.go:117] "RemoveContainer" containerID="40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b" Jan 26 17:39:07 crc kubenswrapper[4823]: E0126 17:39:07.959629 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b\": container with ID starting with 40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b not found: ID does not exist" 
containerID="40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.959674 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b"} err="failed to get container status \"40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b\": rpc error: code = NotFound desc = could not find container \"40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b\": container with ID starting with 40d4695c5f1c40fef3db74f6dbf4f39a343d6f769a52cbfcee00878f4650118b not found: ID does not exist" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.959702 4823 scope.go:117] "RemoveContainer" containerID="913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d" Jan 26 17:39:07 crc kubenswrapper[4823]: E0126 17:39:07.960479 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d\": container with ID starting with 913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d not found: ID does not exist" containerID="913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.960519 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d"} err="failed to get container status \"913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d\": rpc error: code = NotFound desc = could not find container \"913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d\": container with ID starting with 913d5067e47091c8c4306a3d2a4093c964a498b2f9946b831c76fcf8e5e7e95d not found: ID does not exist" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.960545 4823 scope.go:117] 
"RemoveContainer" containerID="7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20" Jan 26 17:39:07 crc kubenswrapper[4823]: E0126 17:39:07.961125 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20\": container with ID starting with 7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20 not found: ID does not exist" containerID="7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20" Jan 26 17:39:07 crc kubenswrapper[4823]: I0126 17:39:07.961163 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20"} err="failed to get container status \"7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20\": rpc error: code = NotFound desc = could not find container \"7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20\": container with ID starting with 7aa9906fa9afb96910cd815b680f9aeb7840a2a34a1c2e3fcc938fc7cd5afd20 not found: ID does not exist" Jan 26 17:39:08 crc kubenswrapper[4823]: I0126 17:39:08.642012 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afd51e01-d6ed-4395-9aa2-211976da81a0" (UID: "afd51e01-d6ed-4395-9aa2-211976da81a0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:39:08 crc kubenswrapper[4823]: I0126 17:39:08.730852 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd51e01-d6ed-4395-9aa2-211976da81a0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:39:08 crc kubenswrapper[4823]: I0126 17:39:08.756305 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7lgdb"] Jan 26 17:39:08 crc kubenswrapper[4823]: I0126 17:39:08.773459 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7lgdb"] Jan 26 17:39:09 crc kubenswrapper[4823]: I0126 17:39:09.570592 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" path="/var/lib/kubelet/pods/afd51e01-d6ed-4395-9aa2-211976da81a0/volumes" Jan 26 17:40:04 crc kubenswrapper[4823]: I0126 17:40:04.507930 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:40:04 crc kubenswrapper[4823]: I0126 17:40:04.509731 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:40:34 crc kubenswrapper[4823]: I0126 17:40:34.508309 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Jan 26 17:40:34 crc kubenswrapper[4823]: I0126 17:40:34.508844 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:41:04 crc kubenswrapper[4823]: I0126 17:41:04.508088 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:41:04 crc kubenswrapper[4823]: I0126 17:41:04.508651 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:41:04 crc kubenswrapper[4823]: I0126 17:41:04.508725 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:41:04 crc kubenswrapper[4823]: I0126 17:41:04.509545 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd47b5961f1639f406deee6ff4351184e482b9fc6d4c8ae8785fbba7b03021ea"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:41:04 crc kubenswrapper[4823]: I0126 17:41:04.509600 4823 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://cd47b5961f1639f406deee6ff4351184e482b9fc6d4c8ae8785fbba7b03021ea" gracePeriod=600 Jan 26 17:41:05 crc kubenswrapper[4823]: I0126 17:41:05.847281 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="cd47b5961f1639f406deee6ff4351184e482b9fc6d4c8ae8785fbba7b03021ea" exitCode=0 Jan 26 17:41:05 crc kubenswrapper[4823]: I0126 17:41:05.847350 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"cd47b5961f1639f406deee6ff4351184e482b9fc6d4c8ae8785fbba7b03021ea"} Jan 26 17:41:05 crc kubenswrapper[4823]: I0126 17:41:05.848017 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6"} Jan 26 17:41:05 crc kubenswrapper[4823]: I0126 17:41:05.848056 4823 scope.go:117] "RemoveContainer" containerID="0edcbb3d6311c553fe4319d76e9c156a17bb29aa2adddb72366ad91e25701d07" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.625570 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ckm7w/must-gather-jqwb8"] Jan 26 17:43:12 crc kubenswrapper[4823]: E0126 17:43:12.626482 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="registry-server" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.626494 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="registry-server" Jan 26 17:43:12 crc kubenswrapper[4823]: E0126 17:43:12.626510 4823 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="extract-utilities" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.626518 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="extract-utilities" Jan 26 17:43:12 crc kubenswrapper[4823]: E0126 17:43:12.626534 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="extract-content" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.626541 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="extract-content" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.626718 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd51e01-d6ed-4395-9aa2-211976da81a0" containerName="registry-server" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.627730 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.630445 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ckm7w"/"openshift-service-ca.crt" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.630659 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ckm7w"/"kube-root-ca.crt" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.630835 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ckm7w"/"default-dockercfg-drcqd" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.635049 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ckm7w/must-gather-jqwb8"] Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.755585 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhf7\" (UniqueName: \"kubernetes.io/projected/8542386b-e3e4-47b9-ad0f-aea78951dd82-kube-api-access-hnhf7\") pod \"must-gather-jqwb8\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") " pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.755931 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8542386b-e3e4-47b9-ad0f-aea78951dd82-must-gather-output\") pod \"must-gather-jqwb8\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") " pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.857643 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhf7\" (UniqueName: \"kubernetes.io/projected/8542386b-e3e4-47b9-ad0f-aea78951dd82-kube-api-access-hnhf7\") pod \"must-gather-jqwb8\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") " 
pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.857745 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8542386b-e3e4-47b9-ad0f-aea78951dd82-must-gather-output\") pod \"must-gather-jqwb8\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") " pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.858247 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8542386b-e3e4-47b9-ad0f-aea78951dd82-must-gather-output\") pod \"must-gather-jqwb8\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") " pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.877276 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhf7\" (UniqueName: \"kubernetes.io/projected/8542386b-e3e4-47b9-ad0f-aea78951dd82-kube-api-access-hnhf7\") pod \"must-gather-jqwb8\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") " pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:12 crc kubenswrapper[4823]: I0126 17:43:12.947154 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" Jan 26 17:43:13 crc kubenswrapper[4823]: I0126 17:43:13.693002 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ckm7w/must-gather-jqwb8"] Jan 26 17:43:14 crc kubenswrapper[4823]: I0126 17:43:14.021609 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" event={"ID":"8542386b-e3e4-47b9-ad0f-aea78951dd82","Type":"ContainerStarted","Data":"d53ed7f0ba2f666e4fbb6cfd50c345976f48efa445190ad050fa7ac2b139d6a5"} Jan 26 17:43:22 crc kubenswrapper[4823]: I0126 17:43:22.100910 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" event={"ID":"8542386b-e3e4-47b9-ad0f-aea78951dd82","Type":"ContainerStarted","Data":"948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7"} Jan 26 17:43:22 crc kubenswrapper[4823]: I0126 17:43:22.101484 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" event={"ID":"8542386b-e3e4-47b9-ad0f-aea78951dd82","Type":"ContainerStarted","Data":"47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"} Jan 26 17:43:22 crc kubenswrapper[4823]: I0126 17:43:22.124452 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" podStartSLOduration=2.958954141 podStartE2EDuration="10.124434307s" podCreationTimestamp="2026-01-26 17:43:12 +0000 UTC" firstStartedPulling="2026-01-26 17:43:13.736741791 +0000 UTC m=+10590.422204896" lastFinishedPulling="2026-01-26 17:43:20.902221957 +0000 UTC m=+10597.587685062" observedRunningTime="2026-01-26 17:43:22.116302485 +0000 UTC m=+10598.801765610" watchObservedRunningTime="2026-01-26 17:43:22.124434307 +0000 UTC m=+10598.809897412" Jan 26 17:43:27 crc kubenswrapper[4823]: E0126 17:43:27.462985 4823 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.102.83.106:43720->38.102.83.106:32927: write tcp 38.102.83.106:43720->38.102.83.106:32927: write: broken pipe Jan 26 17:43:28 crc kubenswrapper[4823]: I0126 17:43:28.756604 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-pmh5x"] Jan 26 17:43:28 crc kubenswrapper[4823]: I0126 17:43:28.760119 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:28 crc kubenswrapper[4823]: I0126 17:43:28.905750 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-host\") pod \"crc-debug-pmh5x\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:28 crc kubenswrapper[4823]: I0126 17:43:28.905860 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nllkx\" (UniqueName: \"kubernetes.io/projected/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-kube-api-access-nllkx\") pod \"crc-debug-pmh5x\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:29 crc kubenswrapper[4823]: I0126 17:43:29.007878 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-host\") pod \"crc-debug-pmh5x\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:29 crc kubenswrapper[4823]: I0126 17:43:29.007976 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-host\") pod \"crc-debug-pmh5x\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:29 crc 
kubenswrapper[4823]: I0126 17:43:29.008297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nllkx\" (UniqueName: \"kubernetes.io/projected/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-kube-api-access-nllkx\") pod \"crc-debug-pmh5x\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:29 crc kubenswrapper[4823]: I0126 17:43:29.027125 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nllkx\" (UniqueName: \"kubernetes.io/projected/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-kube-api-access-nllkx\") pod \"crc-debug-pmh5x\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:29 crc kubenswrapper[4823]: I0126 17:43:29.091327 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:43:29 crc kubenswrapper[4823]: I0126 17:43:29.172882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" event={"ID":"b74c8655-bbd8-4105-af2e-0b5f1f08c34b","Type":"ContainerStarted","Data":"ee595b60b76fed92979104d603d4ab99d3c10bc620ea85e93f1a4a51b9534e19"} Jan 26 17:43:34 crc kubenswrapper[4823]: I0126 17:43:34.508697 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:43:34 crc kubenswrapper[4823]: I0126 17:43:34.510477 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 26 17:43:42 crc kubenswrapper[4823]: I0126 17:43:42.326730 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" event={"ID":"b74c8655-bbd8-4105-af2e-0b5f1f08c34b","Type":"ContainerStarted","Data":"3581e99b11133badcf835eb6dc37cfe77d0845ef9b729cc198e1aa95e3593170"} Jan 26 17:43:42 crc kubenswrapper[4823]: I0126 17:43:42.350268 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" podStartSLOduration=1.946137136 podStartE2EDuration="14.350248394s" podCreationTimestamp="2026-01-26 17:43:28 +0000 UTC" firstStartedPulling="2026-01-26 17:43:29.149383172 +0000 UTC m=+10605.834846277" lastFinishedPulling="2026-01-26 17:43:41.55349443 +0000 UTC m=+10618.238957535" observedRunningTime="2026-01-26 17:43:42.343704836 +0000 UTC m=+10619.029167951" watchObservedRunningTime="2026-01-26 17:43:42.350248394 +0000 UTC m=+10619.035711499" Jan 26 17:44:04 crc kubenswrapper[4823]: I0126 17:44:04.508351 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:44:04 crc kubenswrapper[4823]: I0126 17:44:04.508884 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:44:30 crc kubenswrapper[4823]: I0126 17:44:30.794494 4823 generic.go:334] "Generic (PLEG): container finished" podID="b74c8655-bbd8-4105-af2e-0b5f1f08c34b" containerID="3581e99b11133badcf835eb6dc37cfe77d0845ef9b729cc198e1aa95e3593170" exitCode=0 Jan 26 17:44:30 crc 
kubenswrapper[4823]: I0126 17:44:30.794574 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" event={"ID":"b74c8655-bbd8-4105-af2e-0b5f1f08c34b","Type":"ContainerDied","Data":"3581e99b11133badcf835eb6dc37cfe77d0845ef9b729cc198e1aa95e3593170"} Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.908793 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.970512 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-pmh5x"] Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.979295 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-pmh5x"] Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.990498 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nllkx\" (UniqueName: \"kubernetes.io/projected/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-kube-api-access-nllkx\") pod \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.990723 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-host\") pod \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\" (UID: \"b74c8655-bbd8-4105-af2e-0b5f1f08c34b\") " Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.991241 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-host" (OuterVolumeSpecName: "host") pod "b74c8655-bbd8-4105-af2e-0b5f1f08c34b" (UID: "b74c8655-bbd8-4105-af2e-0b5f1f08c34b"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:44:31 crc kubenswrapper[4823]: I0126 17:44:31.997814 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-kube-api-access-nllkx" (OuterVolumeSpecName: "kube-api-access-nllkx") pod "b74c8655-bbd8-4105-af2e-0b5f1f08c34b" (UID: "b74c8655-bbd8-4105-af2e-0b5f1f08c34b"). InnerVolumeSpecName "kube-api-access-nllkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:32 crc kubenswrapper[4823]: I0126 17:44:32.093493 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-host\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:32 crc kubenswrapper[4823]: I0126 17:44:32.093537 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nllkx\" (UniqueName: \"kubernetes.io/projected/b74c8655-bbd8-4105-af2e-0b5f1f08c34b-kube-api-access-nllkx\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:32 crc kubenswrapper[4823]: I0126 17:44:32.813760 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee595b60b76fed92979104d603d4ab99d3c10bc620ea85e93f1a4a51b9534e19" Jan 26 17:44:32 crc kubenswrapper[4823]: I0126 17:44:32.813830 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-pmh5x" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.156948 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-wwjxt"] Jan 26 17:44:33 crc kubenswrapper[4823]: E0126 17:44:33.157592 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b74c8655-bbd8-4105-af2e-0b5f1f08c34b" containerName="container-00" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.157603 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="b74c8655-bbd8-4105-af2e-0b5f1f08c34b" containerName="container-00" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.157799 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="b74c8655-bbd8-4105-af2e-0b5f1f08c34b" containerName="container-00" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.158437 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.217020 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-host\") pod \"crc-debug-wwjxt\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.217120 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22b6\" (UniqueName: \"kubernetes.io/projected/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-kube-api-access-x22b6\") pod \"crc-debug-wwjxt\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.319984 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x22b6\" (UniqueName: 
\"kubernetes.io/projected/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-kube-api-access-x22b6\") pod \"crc-debug-wwjxt\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.320416 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-host\") pod \"crc-debug-wwjxt\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.320685 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-host\") pod \"crc-debug-wwjxt\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.339777 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x22b6\" (UniqueName: \"kubernetes.io/projected/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-kube-api-access-x22b6\") pod \"crc-debug-wwjxt\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.480388 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.580951 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b74c8655-bbd8-4105-af2e-0b5f1f08c34b" path="/var/lib/kubelet/pods/b74c8655-bbd8-4105-af2e-0b5f1f08c34b/volumes" Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.823567 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" event={"ID":"3a1d58b1-34f0-4f28-90e5-628bdc360ed4","Type":"ContainerStarted","Data":"b3a02ed33de1aeadfe0129346b0b44d745ef53011fbabaf727544cc66af294e9"} Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.823854 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" event={"ID":"3a1d58b1-34f0-4f28-90e5-628bdc360ed4","Type":"ContainerStarted","Data":"f07eca0523263f970b714e9974ab857b34bdaa981ef44189701cc26edb551444"} Jan 26 17:44:33 crc kubenswrapper[4823]: I0126 17:44:33.845940 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" podStartSLOduration=0.845922402 podStartE2EDuration="845.922402ms" podCreationTimestamp="2026-01-26 17:44:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-26 17:44:33.838787178 +0000 UTC m=+10670.524250283" watchObservedRunningTime="2026-01-26 17:44:33.845922402 +0000 UTC m=+10670.531385507" Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.507745 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.507800 4823 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.507840 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.508573 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.508618 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" gracePeriod=600 Jan 26 17:44:34 crc kubenswrapper[4823]: E0126 17:44:34.632214 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.834696 4823 generic.go:334] "Generic (PLEG): container finished" podID="3a1d58b1-34f0-4f28-90e5-628bdc360ed4" 
containerID="b3a02ed33de1aeadfe0129346b0b44d745ef53011fbabaf727544cc66af294e9" exitCode=0 Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.834746 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" event={"ID":"3a1d58b1-34f0-4f28-90e5-628bdc360ed4","Type":"ContainerDied","Data":"b3a02ed33de1aeadfe0129346b0b44d745ef53011fbabaf727544cc66af294e9"} Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.837821 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" exitCode=0 Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.837856 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6"} Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.837883 4823 scope.go:117] "RemoveContainer" containerID="cd47b5961f1639f406deee6ff4351184e482b9fc6d4c8ae8785fbba7b03021ea" Jan 26 17:44:34 crc kubenswrapper[4823]: I0126 17:44:34.838493 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:44:34 crc kubenswrapper[4823]: E0126 17:44:34.838758 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:44:35 crc kubenswrapper[4823]: I0126 17:44:35.949517 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.073135 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-host\") pod \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.073257 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-host" (OuterVolumeSpecName: "host") pod "3a1d58b1-34f0-4f28-90e5-628bdc360ed4" (UID: "3a1d58b1-34f0-4f28-90e5-628bdc360ed4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.073465 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x22b6\" (UniqueName: \"kubernetes.io/projected/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-kube-api-access-x22b6\") pod \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\" (UID: \"3a1d58b1-34f0-4f28-90e5-628bdc360ed4\") " Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.074005 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-host\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.082608 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-kube-api-access-x22b6" (OuterVolumeSpecName: "kube-api-access-x22b6") pod "3a1d58b1-34f0-4f28-90e5-628bdc360ed4" (UID: "3a1d58b1-34f0-4f28-90e5-628bdc360ed4"). InnerVolumeSpecName "kube-api-access-x22b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.176565 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x22b6\" (UniqueName: \"kubernetes.io/projected/3a1d58b1-34f0-4f28-90e5-628bdc360ed4-kube-api-access-x22b6\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.857470 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" event={"ID":"3a1d58b1-34f0-4f28-90e5-628bdc360ed4","Type":"ContainerDied","Data":"f07eca0523263f970b714e9974ab857b34bdaa981ef44189701cc26edb551444"} Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.857513 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f07eca0523263f970b714e9974ab857b34bdaa981ef44189701cc26edb551444" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.857516 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-wwjxt" Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.880178 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-wwjxt"] Jan 26 17:44:36 crc kubenswrapper[4823]: I0126 17:44:36.887618 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-wwjxt"] Jan 26 17:44:37 crc kubenswrapper[4823]: I0126 17:44:37.574291 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1d58b1-34f0-4f28-90e5-628bdc360ed4" path="/var/lib/kubelet/pods/3a1d58b1-34f0-4f28-90e5-628bdc360ed4/volumes" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.075277 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-vp2ds"] Jan 26 17:44:38 crc kubenswrapper[4823]: E0126 17:44:38.076475 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1d58b1-34f0-4f28-90e5-628bdc360ed4" 
containerName="container-00" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.076577 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1d58b1-34f0-4f28-90e5-628bdc360ed4" containerName="container-00" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.076863 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1d58b1-34f0-4f28-90e5-628bdc360ed4" containerName="container-00" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.077756 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.216998 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmrcx\" (UniqueName: \"kubernetes.io/projected/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-kube-api-access-hmrcx\") pod \"crc-debug-vp2ds\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.217411 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-host\") pod \"crc-debug-vp2ds\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.320042 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmrcx\" (UniqueName: \"kubernetes.io/projected/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-kube-api-access-hmrcx\") pod \"crc-debug-vp2ds\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.320186 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-host\") pod \"crc-debug-vp2ds\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.320292 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-host\") pod \"crc-debug-vp2ds\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.339876 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmrcx\" (UniqueName: \"kubernetes.io/projected/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-kube-api-access-hmrcx\") pod \"crc-debug-vp2ds\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.412212 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:38 crc kubenswrapper[4823]: W0126 17:44:38.438913 4823 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podabca43d0_97bd_45f6_8ebc_f42a585ed0aa.slice/crio-5e3fdb1890563b4b6af080a2fbaf05580bd27ef01d6750167dde610739b415d3 WatchSource:0}: Error finding container 5e3fdb1890563b4b6af080a2fbaf05580bd27ef01d6750167dde610739b415d3: Status 404 returned error can't find the container with id 5e3fdb1890563b4b6af080a2fbaf05580bd27ef01d6750167dde610739b415d3 Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.878404 4823 generic.go:334] "Generic (PLEG): container finished" podID="abca43d0-97bd-45f6-8ebc-f42a585ed0aa" containerID="4796c3cb7172a6493b5588b26bb5ab35c7c970a93b143208bcda62b7a43b48f9" exitCode=0 Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.878469 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" event={"ID":"abca43d0-97bd-45f6-8ebc-f42a585ed0aa","Type":"ContainerDied","Data":"4796c3cb7172a6493b5588b26bb5ab35c7c970a93b143208bcda62b7a43b48f9"} Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.878508 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" event={"ID":"abca43d0-97bd-45f6-8ebc-f42a585ed0aa","Type":"ContainerStarted","Data":"5e3fdb1890563b4b6af080a2fbaf05580bd27ef01d6750167dde610739b415d3"} Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.918047 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-vp2ds"] Jan 26 17:44:38 crc kubenswrapper[4823]: I0126 17:44:38.927164 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ckm7w/crc-debug-vp2ds"] Jan 26 17:44:39 crc kubenswrapper[4823]: I0126 17:44:39.993756 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.058668 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-host\") pod \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.058728 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmrcx\" (UniqueName: \"kubernetes.io/projected/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-kube-api-access-hmrcx\") pod \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\" (UID: \"abca43d0-97bd-45f6-8ebc-f42a585ed0aa\") " Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.059672 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-host" (OuterVolumeSpecName: "host") pod "abca43d0-97bd-45f6-8ebc-f42a585ed0aa" (UID: "abca43d0-97bd-45f6-8ebc-f42a585ed0aa"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.067250 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-kube-api-access-hmrcx" (OuterVolumeSpecName: "kube-api-access-hmrcx") pod "abca43d0-97bd-45f6-8ebc-f42a585ed0aa" (UID: "abca43d0-97bd-45f6-8ebc-f42a585ed0aa"). InnerVolumeSpecName "kube-api-access-hmrcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.161242 4823 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-host\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.161616 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmrcx\" (UniqueName: \"kubernetes.io/projected/abca43d0-97bd-45f6-8ebc-f42a585ed0aa-kube-api-access-hmrcx\") on node \"crc\" DevicePath \"\"" Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.899082 4823 scope.go:117] "RemoveContainer" containerID="4796c3cb7172a6493b5588b26bb5ab35c7c970a93b143208bcda62b7a43b48f9" Jan 26 17:44:40 crc kubenswrapper[4823]: I0126 17:44:40.899149 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/crc-debug-vp2ds" Jan 26 17:44:41 crc kubenswrapper[4823]: I0126 17:44:41.572522 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abca43d0-97bd-45f6-8ebc-f42a585ed0aa" path="/var/lib/kubelet/pods/abca43d0-97bd-45f6-8ebc-f42a585ed0aa/volumes" Jan 26 17:44:47 crc kubenswrapper[4823]: I0126 17:44:47.979929 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n5d9t"] Jan 26 17:44:47 crc kubenswrapper[4823]: E0126 17:44:47.981864 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abca43d0-97bd-45f6-8ebc-f42a585ed0aa" containerName="container-00" Jan 26 17:44:47 crc kubenswrapper[4823]: I0126 17:44:47.981889 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="abca43d0-97bd-45f6-8ebc-f42a585ed0aa" containerName="container-00" Jan 26 17:44:47 crc kubenswrapper[4823]: I0126 17:44:47.983682 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="abca43d0-97bd-45f6-8ebc-f42a585ed0aa" containerName="container-00" Jan 26 17:44:47 crc 
kubenswrapper[4823]: I0126 17:44:47.989063 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.004471 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5d9t"] Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.082067 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-catalog-content\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.082211 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2prp2\" (UniqueName: \"kubernetes.io/projected/bf265c76-1547-4a80-bdb3-e3d724cece26-kube-api-access-2prp2\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.082291 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-utilities\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.184533 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-catalog-content\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 
crc kubenswrapper[4823]: I0126 17:44:48.184722 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2prp2\" (UniqueName: \"kubernetes.io/projected/bf265c76-1547-4a80-bdb3-e3d724cece26-kube-api-access-2prp2\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.184824 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-utilities\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.185580 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-utilities\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.185895 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-catalog-content\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.214457 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2prp2\" (UniqueName: \"kubernetes.io/projected/bf265c76-1547-4a80-bdb3-e3d724cece26-kube-api-access-2prp2\") pod \"redhat-marketplace-n5d9t\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.331523 
4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.560916 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:44:48 crc kubenswrapper[4823]: E0126 17:44:48.561809 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:44:48 crc kubenswrapper[4823]: I0126 17:44:48.865707 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5d9t"] Jan 26 17:44:49 crc kubenswrapper[4823]: I0126 17:44:49.023826 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerStarted","Data":"0bb19e3a7475698a12c28c138094cf3cdaf73a160198a44559c0950b3ce9f600"} Jan 26 17:44:50 crc kubenswrapper[4823]: I0126 17:44:50.036348 4823 generic.go:334] "Generic (PLEG): container finished" podID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerID="1c8fcc0ff6a6cebbe159dcc2ff80b60519cb22cbd5d562ec49a7e00381af10ce" exitCode=0 Jan 26 17:44:50 crc kubenswrapper[4823]: I0126 17:44:50.036430 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerDied","Data":"1c8fcc0ff6a6cebbe159dcc2ff80b60519cb22cbd5d562ec49a7e00381af10ce"} Jan 26 17:44:50 crc kubenswrapper[4823]: I0126 17:44:50.039354 4823 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Jan 26 17:44:51 crc kubenswrapper[4823]: I0126 17:44:51.051988 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerStarted","Data":"18635684e1200681cf992b24e2702ad43a58ec4ba92a521eae2f133accb270fd"} Jan 26 17:44:52 crc kubenswrapper[4823]: I0126 17:44:52.061648 4823 generic.go:334] "Generic (PLEG): container finished" podID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerID="18635684e1200681cf992b24e2702ad43a58ec4ba92a521eae2f133accb270fd" exitCode=0 Jan 26 17:44:52 crc kubenswrapper[4823]: I0126 17:44:52.061714 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerDied","Data":"18635684e1200681cf992b24e2702ad43a58ec4ba92a521eae2f133accb270fd"} Jan 26 17:44:53 crc kubenswrapper[4823]: I0126 17:44:53.073923 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerStarted","Data":"811b749b18e83a92a6f668a4ad7f3b51448e251c487c3185116f3df9a6a74f4f"} Jan 26 17:44:53 crc kubenswrapper[4823]: I0126 17:44:53.098947 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n5d9t" podStartSLOduration=3.578120556 podStartE2EDuration="6.098928435s" podCreationTimestamp="2026-01-26 17:44:47 +0000 UTC" firstStartedPulling="2026-01-26 17:44:50.039013918 +0000 UTC m=+10686.724477033" lastFinishedPulling="2026-01-26 17:44:52.559821807 +0000 UTC m=+10689.245284912" observedRunningTime="2026-01-26 17:44:53.094753012 +0000 UTC m=+10689.780216117" watchObservedRunningTime="2026-01-26 17:44:53.098928435 +0000 UTC m=+10689.784391540" Jan 26 17:44:58 crc kubenswrapper[4823]: I0126 17:44:58.334058 4823 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:58 crc kubenswrapper[4823]: I0126 17:44:58.334650 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:58 crc kubenswrapper[4823]: I0126 17:44:58.387672 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:59 crc kubenswrapper[4823]: I0126 17:44:59.222171 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:44:59 crc kubenswrapper[4823]: I0126 17:44:59.295482 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5d9t"] Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.163916 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f"] Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.166621 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.170131 4823 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.170242 4823 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.176768 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f"] Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.317039 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920282f1-0d58-49c1-9ff3-eb722e983305-config-volume\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.317171 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920282f1-0d58-49c1-9ff3-eb722e983305-secret-volume\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.317248 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rl57\" (UniqueName: \"kubernetes.io/projected/920282f1-0d58-49c1-9ff3-eb722e983305-kube-api-access-9rl57\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.419555 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920282f1-0d58-49c1-9ff3-eb722e983305-config-volume\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.419598 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920282f1-0d58-49c1-9ff3-eb722e983305-secret-volume\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.419628 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rl57\" (UniqueName: \"kubernetes.io/projected/920282f1-0d58-49c1-9ff3-eb722e983305-kube-api-access-9rl57\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.422627 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920282f1-0d58-49c1-9ff3-eb722e983305-config-volume\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.427006 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/920282f1-0d58-49c1-9ff3-eb722e983305-secret-volume\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.436784 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rl57\" (UniqueName: \"kubernetes.io/projected/920282f1-0d58-49c1-9ff3-eb722e983305-kube-api-access-9rl57\") pod \"collect-profiles-29490825-pzf6f\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.498692 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:00 crc kubenswrapper[4823]: I0126 17:45:00.959066 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f"] Jan 26 17:45:01 crc kubenswrapper[4823]: I0126 17:45:01.152277 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n5d9t" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="registry-server" containerID="cri-o://811b749b18e83a92a6f668a4ad7f3b51448e251c487c3185116f3df9a6a74f4f" gracePeriod=2 Jan 26 17:45:01 crc kubenswrapper[4823]: I0126 17:45:01.152694 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" event={"ID":"920282f1-0d58-49c1-9ff3-eb722e983305","Type":"ContainerStarted","Data":"ea14f302907ea390ffdf38657f15959bb005decec2885707bc04a939919dabbe"} Jan 26 17:45:01 crc kubenswrapper[4823]: I0126 17:45:01.561642 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:45:01 crc 
kubenswrapper[4823]: E0126 17:45:01.562270 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.165280 4823 generic.go:334] "Generic (PLEG): container finished" podID="920282f1-0d58-49c1-9ff3-eb722e983305" containerID="83a4a05e2ee359e1cb09b2f350fc794925aa14ecbae96511d57a247343da1bcd" exitCode=0 Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.165387 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" event={"ID":"920282f1-0d58-49c1-9ff3-eb722e983305","Type":"ContainerDied","Data":"83a4a05e2ee359e1cb09b2f350fc794925aa14ecbae96511d57a247343da1bcd"} Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.169770 4823 generic.go:334] "Generic (PLEG): container finished" podID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerID="811b749b18e83a92a6f668a4ad7f3b51448e251c487c3185116f3df9a6a74f4f" exitCode=0 Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.169811 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerDied","Data":"811b749b18e83a92a6f668a4ad7f3b51448e251c487c3185116f3df9a6a74f4f"} Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.169838 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n5d9t" event={"ID":"bf265c76-1547-4a80-bdb3-e3d724cece26","Type":"ContainerDied","Data":"0bb19e3a7475698a12c28c138094cf3cdaf73a160198a44559c0950b3ce9f600"} Jan 26 17:45:02 crc 
kubenswrapper[4823]: I0126 17:45:02.169854 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb19e3a7475698a12c28c138094cf3cdaf73a160198a44559c0950b3ce9f600" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.239495 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.364632 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-catalog-content\") pod \"bf265c76-1547-4a80-bdb3-e3d724cece26\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.365197 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-utilities\") pod \"bf265c76-1547-4a80-bdb3-e3d724cece26\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.365296 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2prp2\" (UniqueName: \"kubernetes.io/projected/bf265c76-1547-4a80-bdb3-e3d724cece26-kube-api-access-2prp2\") pod \"bf265c76-1547-4a80-bdb3-e3d724cece26\" (UID: \"bf265c76-1547-4a80-bdb3-e3d724cece26\") " Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.366079 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-utilities" (OuterVolumeSpecName: "utilities") pod "bf265c76-1547-4a80-bdb3-e3d724cece26" (UID: "bf265c76-1547-4a80-bdb3-e3d724cece26"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.366632 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.376650 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf265c76-1547-4a80-bdb3-e3d724cece26-kube-api-access-2prp2" (OuterVolumeSpecName: "kube-api-access-2prp2") pod "bf265c76-1547-4a80-bdb3-e3d724cece26" (UID: "bf265c76-1547-4a80-bdb3-e3d724cece26"). InnerVolumeSpecName "kube-api-access-2prp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.385710 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf265c76-1547-4a80-bdb3-e3d724cece26" (UID: "bf265c76-1547-4a80-bdb3-e3d724cece26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.469005 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf265c76-1547-4a80-bdb3-e3d724cece26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:02 crc kubenswrapper[4823]: I0126 17:45:02.469043 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2prp2\" (UniqueName: \"kubernetes.io/projected/bf265c76-1547-4a80-bdb3-e3d724cece26-kube-api-access-2prp2\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.178601 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n5d9t" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.215803 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5d9t"] Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.226490 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n5d9t"] Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.516167 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.572223 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" path="/var/lib/kubelet/pods/bf265c76-1547-4a80-bdb3-e3d724cece26/volumes" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.694834 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920282f1-0d58-49c1-9ff3-eb722e983305-secret-volume\") pod \"920282f1-0d58-49c1-9ff3-eb722e983305\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.695017 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rl57\" (UniqueName: \"kubernetes.io/projected/920282f1-0d58-49c1-9ff3-eb722e983305-kube-api-access-9rl57\") pod \"920282f1-0d58-49c1-9ff3-eb722e983305\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.695061 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920282f1-0d58-49c1-9ff3-eb722e983305-config-volume\") pod \"920282f1-0d58-49c1-9ff3-eb722e983305\" (UID: \"920282f1-0d58-49c1-9ff3-eb722e983305\") " Jan 26 17:45:03 crc 
kubenswrapper[4823]: I0126 17:45:03.696115 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/920282f1-0d58-49c1-9ff3-eb722e983305-config-volume" (OuterVolumeSpecName: "config-volume") pod "920282f1-0d58-49c1-9ff3-eb722e983305" (UID: "920282f1-0d58-49c1-9ff3-eb722e983305"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.700302 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/920282f1-0d58-49c1-9ff3-eb722e983305-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "920282f1-0d58-49c1-9ff3-eb722e983305" (UID: "920282f1-0d58-49c1-9ff3-eb722e983305"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.700347 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/920282f1-0d58-49c1-9ff3-eb722e983305-kube-api-access-9rl57" (OuterVolumeSpecName: "kube-api-access-9rl57") pod "920282f1-0d58-49c1-9ff3-eb722e983305" (UID: "920282f1-0d58-49c1-9ff3-eb722e983305"). InnerVolumeSpecName "kube-api-access-9rl57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.797469 4823 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/920282f1-0d58-49c1-9ff3-eb722e983305-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.797505 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rl57\" (UniqueName: \"kubernetes.io/projected/920282f1-0d58-49c1-9ff3-eb722e983305-kube-api-access-9rl57\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:03 crc kubenswrapper[4823]: I0126 17:45:03.797515 4823 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/920282f1-0d58-49c1-9ff3-eb722e983305-config-volume\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:04 crc kubenswrapper[4823]: I0126 17:45:04.187832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" event={"ID":"920282f1-0d58-49c1-9ff3-eb722e983305","Type":"ContainerDied","Data":"ea14f302907ea390ffdf38657f15959bb005decec2885707bc04a939919dabbe"} Jan 26 17:45:04 crc kubenswrapper[4823]: I0126 17:45:04.188645 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea14f302907ea390ffdf38657f15959bb005decec2885707bc04a939919dabbe" Jan 26 17:45:04 crc kubenswrapper[4823]: I0126 17:45:04.187912 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29490825-pzf6f" Jan 26 17:45:04 crc kubenswrapper[4823]: I0126 17:45:04.583614 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6"] Jan 26 17:45:04 crc kubenswrapper[4823]: I0126 17:45:04.590966 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29490780-pdms6"] Jan 26 17:45:05 crc kubenswrapper[4823]: I0126 17:45:05.570650 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1013816e-b1b5-4182-9275-801ed193469f" path="/var/lib/kubelet/pods/1013816e-b1b5-4182-9275-801ed193469f/volumes" Jan 26 17:45:10 crc kubenswrapper[4823]: I0126 17:45:10.881531 4823 scope.go:117] "RemoveContainer" containerID="1bdf9b2824aa2b31e57b3162875cdb6da43affee5fa1160944657a91ef9aa130" Jan 26 17:45:16 crc kubenswrapper[4823]: I0126 17:45:16.560860 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:45:16 crc kubenswrapper[4823]: E0126 17:45:16.561722 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:45:23 crc kubenswrapper[4823]: I0126 17:45:23.877206 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-67c696b96b-69j89_1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f/barbican-api/0.log" Jan 26 17:45:23 crc kubenswrapper[4823]: I0126 17:45:23.918594 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-67c696b96b-69j89_1edfaf5a-5a23-45ff-b46d-2af9b2a88d3f/barbican-api-log/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.079337 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75d68448b6-k48rf_1f23e9d9-0eae-4911-af41-a71424a974f7/barbican-keystone-listener/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.232630 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-786584bf8c-z6fpx_636bf50c-43c9-4d39-af26-187a531e84ad/barbican-worker/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.360074 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-786584bf8c-z6fpx_636bf50c-43c9-4d39-af26-187a531e84ad/barbican-worker-log/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.473594 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-hspxg_18dfc993-b32b-4eae-9258-b6ac5a48e3ba/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.561427 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-75d68448b6-k48rf_1f23e9d9-0eae-4911-af41-a71424a974f7/barbican-keystone-listener-log/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.678908 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_953ca111-757e-44e8-9f00-1b4576cb4b3c/ceilometer-central-agent/1.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.786875 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_953ca111-757e-44e8-9f00-1b4576cb4b3c/ceilometer-central-agent/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.795948 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_953ca111-757e-44e8-9f00-1b4576cb4b3c/ceilometer-notification-agent/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.875660 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_953ca111-757e-44e8-9f00-1b4576cb4b3c/sg-core/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.909654 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_953ca111-757e-44e8-9f00-1b4576cb4b3c/proxy-httpd/0.log" Jan 26 17:45:24 crc kubenswrapper[4823]: I0126 17:45:24.974869 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-dwvkk_3668c188-085e-4a02-8847-a4ccbd1ab067/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:25 crc kubenswrapper[4823]: I0126 17:45:25.142709 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-rw4zv_0f9c42b3-fbf9-4678-ab39-cf772f154f4b/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:25 crc kubenswrapper[4823]: I0126 17:45:25.456955 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_39eda835-a007-4e12-8a6a-86100eb17105/cinder-api/0.log" Jan 26 17:45:25 crc kubenswrapper[4823]: I0126 17:45:25.505128 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_39eda835-a007-4e12-8a6a-86100eb17105/cinder-api-log/0.log" Jan 26 17:45:25 crc kubenswrapper[4823]: I0126 17:45:25.734914 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a3ab756b-769b-47fd-8ade-e462a900db55/cinder-backup/0.log" Jan 26 17:45:25 crc kubenswrapper[4823]: I0126 17:45:25.763005 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_a3ab756b-769b-47fd-8ade-e462a900db55/probe/0.log" Jan 26 17:45:25 crc kubenswrapper[4823]: I0126 17:45:25.815409 4823 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2e373a7-6b26-47ee-9748-da6d2212c1fe/cinder-scheduler/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.028499 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_f2e373a7-6b26-47ee-9748-da6d2212c1fe/probe/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.036846 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b85d2c1d-42f6-4e32-a614-e8ddc9e888fa/cinder-volume/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.100238 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_b85d2c1d-42f6-4e32-a614-e8ddc9e888fa/probe/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.255694 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-vnxd5_cf0b58f0-fc03-49a7-8795-112628f1e6e1/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.337082 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-hg6rq_3f4f277f-d919-44c2-b53e-d5f7d1e9dc3a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.501499 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5pz9v_6b032822-a0f5-42d5-81d4-a3804a3714b9/init/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.667046 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5pz9v_6b032822-a0f5-42d5-81d4-a3804a3714b9/init/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.799238 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_c337780e-6bca-4513-b48d-11b3773ac33b/glance-httpd/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.880590 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69655fd4bf-5pz9v_6b032822-a0f5-42d5-81d4-a3804a3714b9/dnsmasq-dns/0.log" Jan 26 17:45:26 crc kubenswrapper[4823]: I0126 17:45:26.924731 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c337780e-6bca-4513-b48d-11b3773ac33b/glance-log/0.log" Jan 26 17:45:27 crc kubenswrapper[4823]: I0126 17:45:27.068653 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f08433cb-bda2-4072-a5cb-b3ca302d032f/glance-log/0.log" Jan 26 17:45:27 crc kubenswrapper[4823]: I0126 17:45:27.106114 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f08433cb-bda2-4072-a5cb-b3ca302d032f/glance-httpd/0.log" Jan 26 17:45:27 crc kubenswrapper[4823]: I0126 17:45:27.427063 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c6cbf99d4-vbwh8_4c60001f-e43a-4559-ba67-134f88a3f2a6/horizon/0.log" Jan 26 17:45:27 crc kubenswrapper[4823]: I0126 17:45:27.448102 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-f9b47_6b8ae9fa-6766-46fe-9729-3997384f9b41/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:27 crc kubenswrapper[4823]: I0126 17:45:27.634694 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-mj7bv_50405418-70aa-488f-b5e1-dc48b0888adf/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:28 crc kubenswrapper[4823]: I0126 17:45:28.098160 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29490721-lbzct_96419d80-48f5-4579-884b-ae8f81f43ff6/keystone-cron/0.log" Jan 26 17:45:28 crc kubenswrapper[4823]: I0126 17:45:28.301507 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29490781-rft4l_9e12af4b-204e-466a-9690-c2d44c25f1cd/keystone-cron/0.log" Jan 26 17:45:28 crc kubenswrapper[4823]: I0126 17:45:28.468502 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_28e0835b-8ae8-4732-883a-65766b6c38a7/kube-state-metrics/0.log" Jan 26 17:45:28 crc kubenswrapper[4823]: I0126 17:45:28.566202 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:45:28 crc kubenswrapper[4823]: E0126 17:45:28.566672 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:45:28 crc kubenswrapper[4823]: I0126 17:45:28.664483 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-cpzc6_b2993a3c-5b24-475d-b1cf-38d4611f55fa/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:28 crc kubenswrapper[4823]: I0126 17:45:28.872169 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_25f20855-79ca-439f-a558-66d82e32988f/manila-api-log/0.log" Jan 26 17:45:29 crc kubenswrapper[4823]: I0126 17:45:29.010774 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-6c6cbf99d4-vbwh8_4c60001f-e43a-4559-ba67-134f88a3f2a6/horizon-log/0.log" Jan 26 17:45:29 crc kubenswrapper[4823]: I0126 17:45:29.154752 
4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_25f20855-79ca-439f-a558-66d82e32988f/manila-api/0.log" Jan 26 17:45:29 crc kubenswrapper[4823]: I0126 17:45:29.163718 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_a3022a53-0ff5-4e22-9229-9747a29daac9/probe/0.log" Jan 26 17:45:29 crc kubenswrapper[4823]: I0126 17:45:29.207705 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_a3022a53-0ff5-4e22-9229-9747a29daac9/manila-scheduler/0.log" Jan 26 17:45:29 crc kubenswrapper[4823]: I0126 17:45:29.436849 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_27067c33-cf62-4e6d-9f91-7c1867d0b195/probe/0.log" Jan 26 17:45:29 crc kubenswrapper[4823]: I0126 17:45:29.493191 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_27067c33-cf62-4e6d-9f91-7c1867d0b195/manila-share/0.log" Jan 26 17:45:30 crc kubenswrapper[4823]: I0126 17:45:30.102009 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-6phfn_79d82d48-4498-49e0-b395-3d33c0ecdf1a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:30 crc kubenswrapper[4823]: I0126 17:45:30.738737 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-854b94d7cf-txq64_c0e3f717-0113-4cb6-be1c-90a19ddf9ee9/neutron-httpd/0.log" Jan 26 17:45:31 crc kubenswrapper[4823]: I0126 17:45:31.586894 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-76d987df64-77wdm_758cf2bf-d514-4a17-88e5-463286f0a3e9/keystone-api/0.log" Jan 26 17:45:31 crc kubenswrapper[4823]: I0126 17:45:31.682854 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-854b94d7cf-txq64_c0e3f717-0113-4cb6-be1c-90a19ddf9ee9/neutron-api/0.log" Jan 26 17:45:32 crc kubenswrapper[4823]: I0126 
17:45:32.468637 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_1b8d7965-2086-497c-aa14-b8922c56fc65/nova-cell1-conductor-conductor/0.log" Jan 26 17:45:32 crc kubenswrapper[4823]: I0126 17:45:32.541089 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5826af2c-a67e-4848-a374-794b8c905989/nova-cell0-conductor-conductor/0.log" Jan 26 17:45:33 crc kubenswrapper[4823]: I0126 17:45:33.233161 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fb7670ab-2e8e-4af0-a8a6-f8aafbdec117/nova-cell1-novncproxy-novncproxy/0.log" Jan 26 17:45:33 crc kubenswrapper[4823]: I0126 17:45:33.481264 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-2sfsx_df6f1f36-070b-46b4-af52-c113c5f3c5c8/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:33 crc kubenswrapper[4823]: I0126 17:45:33.889522 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_948e5a03-94e3-47a1-a589-0738ba9fec3d/nova-metadata-log/0.log" Jan 26 17:45:35 crc kubenswrapper[4823]: I0126 17:45:35.017040 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d532c0da-749a-4f0c-8157-b79e71b715ac/nova-scheduler-scheduler/0.log" Jan 26 17:45:35 crc kubenswrapper[4823]: I0126 17:45:35.242947 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8ca95455-aa3f-4fa1-a292-d3745005d671/nova-api-log/0.log" Jan 26 17:45:35 crc kubenswrapper[4823]: I0126 17:45:35.450145 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dd872d0-a323-4968-9e53-37fefc8adc23/mysql-bootstrap/0.log" Jan 26 17:45:35 crc kubenswrapper[4823]: I0126 17:45:35.639387 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-cell1-galera-0_7dd872d0-a323-4968-9e53-37fefc8adc23/mysql-bootstrap/0.log" Jan 26 17:45:35 crc kubenswrapper[4823]: I0126 17:45:35.695242 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7dd872d0-a323-4968-9e53-37fefc8adc23/galera/0.log" Jan 26 17:45:35 crc kubenswrapper[4823]: I0126 17:45:35.884305 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29094f76-d918-4ee5-8064-52c459a4bdce/mysql-bootstrap/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.103881 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29094f76-d918-4ee5-8064-52c459a4bdce/mysql-bootstrap/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.202730 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_29094f76-d918-4ee5-8064-52c459a4bdce/galera/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.418861 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_63a706b5-54fa-4b1a-a755-04a5a7a52973/openstackclient/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.420658 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_8ca95455-aa3f-4fa1-a292-d3745005d671/nova-api-api/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.565524 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-bfnxd_f351fa81-8bb3-4b68-9971-f0e5015c60f3/openstack-network-exporter/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.725733 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twc9z_2a39ae8b-f50c-492b-9d4c-308b9b4c87d2/ovsdb-server-init/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.929298 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-twc9z_2a39ae8b-f50c-492b-9d4c-308b9b4c87d2/ovsdb-server-init/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.983810 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twc9z_2a39ae8b-f50c-492b-9d4c-308b9b4c87d2/ovsdb-server/0.log" Jan 26 17:45:36 crc kubenswrapper[4823]: I0126 17:45:36.995584 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-twc9z_2a39ae8b-f50c-492b-9d4c-308b9b4c87d2/ovs-vswitchd/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.189504 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-s4g2z_366c188c-7e0f-4ac6-8fa6-7a466714d0ea/ovn-controller/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.417588 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-bz7sl_540d0393-5844-4d2f-bc69-88a5dd952af0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.492420 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_103958d3-5a75-408a-bcc3-02788016b72e/openstack-network-exporter/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.598498 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_103958d3-5a75-408a-bcc3-02788016b72e/ovn-northd/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.679592 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a7f3574f-bf6a-45bc-9b87-e519b18bf3dd/openstack-network-exporter/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.789427 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a7f3574f-bf6a-45bc-9b87-e519b18bf3dd/ovsdbserver-nb/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.883504 4823 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d8fedd21-5444-4125-ac93-dedfe64abef7/openstack-network-exporter/0.log" Jan 26 17:45:37 crc kubenswrapper[4823]: I0126 17:45:37.997469 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_d8fedd21-5444-4125-ac93-dedfe64abef7/ovsdbserver-sb/0.log" Jan 26 17:45:38 crc kubenswrapper[4823]: I0126 17:45:38.615166 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c89e518a-a264-4196-97f5-4614a0b2d59d/setup-container/0.log" Jan 26 17:45:38 crc kubenswrapper[4823]: I0126 17:45:38.764771 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c89e518a-a264-4196-97f5-4614a0b2d59d/setup-container/0.log" Jan 26 17:45:38 crc kubenswrapper[4823]: I0126 17:45:38.901334 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7c575987db-2cpjc_97f649fc-fc2f-4b59-8ed6-0f7c31426519/placement-api/0.log" Jan 26 17:45:38 crc kubenswrapper[4823]: I0126 17:45:38.983805 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c89e518a-a264-4196-97f5-4614a0b2d59d/rabbitmq/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.110909 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7c575987db-2cpjc_97f649fc-fc2f-4b59-8ed6-0f7c31426519/placement-log/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.180453 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a38fdbff-2641-41d8-9b9b-ad6fe2fd9147/setup-container/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.398765 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a38fdbff-2641-41d8-9b9b-ad6fe2fd9147/setup-container/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.416441 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-server-0_a38fdbff-2641-41d8-9b9b-ad6fe2fd9147/rabbitmq/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.667104 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bwlgb_dd40af06-84f8-4f72-86b7-ca918c279d1d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.677200 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-krggh_8a90f744-fb78-46b3-9b5b-c83e711fafc5/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.677726 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_948e5a03-94e3-47a1-a589-0738ba9fec3d/nova-metadata-metadata/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.905547 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-j6rkd_fd61fd12-7479-477c-8139-de16026c8868/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:39 crc kubenswrapper[4823]: I0126 17:45:39.918468 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-n542v_9d8764bd-6f30-4b0f-9ada-a051069f288e/ssh-known-hosts-edpm-deployment/0.log" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.159279 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-full_1529ef7b-113d-479f-b4b7-d134a51539e3/tempest-tests-tempest-tests-runner/0.log" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.255327 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-test_61b86bcd-b461-4d98-b3ab-67a1fd95eddc/tempest-tests-tempest-tests-runner/0.log" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.408647 4823 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_288cc5ba-6f03-4b43-aa8a-840ab47267a4/test-operator-logs-container/0.log" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.511547 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-g2rfb_9a238cba-38fe-45bd-b0f4-aca93eb1484b/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.797898 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t6czc"] Jan 26 17:45:40 crc kubenswrapper[4823]: E0126 17:45:40.798308 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="registry-server" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.798320 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="registry-server" Jan 26 17:45:40 crc kubenswrapper[4823]: E0126 17:45:40.798328 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="extract-content" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.798334 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="extract-content" Jan 26 17:45:40 crc kubenswrapper[4823]: E0126 17:45:40.798341 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="920282f1-0d58-49c1-9ff3-eb722e983305" containerName="collect-profiles" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.798348 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="920282f1-0d58-49c1-9ff3-eb722e983305" containerName="collect-profiles" Jan 26 17:45:40 crc kubenswrapper[4823]: E0126 17:45:40.798420 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="extract-utilities" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.798427 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="extract-utilities" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.798593 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="920282f1-0d58-49c1-9ff3-eb722e983305" containerName="collect-profiles" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.798610 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf265c76-1547-4a80-bdb3-e3d724cece26" containerName="registry-server" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.800000 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.810033 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6czc"] Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.923588 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-catalog-content\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.923714 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-utilities\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:40 crc kubenswrapper[4823]: I0126 17:45:40.923827 4823 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jbv\" (UniqueName: \"kubernetes.io/projected/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-kube-api-access-p7jbv\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.025220 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-catalog-content\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.025352 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-utilities\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.025505 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7jbv\" (UniqueName: \"kubernetes.io/projected/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-kube-api-access-p7jbv\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.025780 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-catalog-content\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.026096 4823 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-utilities\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.053926 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jbv\" (UniqueName: \"kubernetes.io/projected/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-kube-api-access-p7jbv\") pod \"community-operators-t6czc\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.140978 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:41 crc kubenswrapper[4823]: I0126 17:45:41.735580 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6czc"] Jan 26 17:45:42 crc kubenswrapper[4823]: I0126 17:45:42.566719 4823 generic.go:334] "Generic (PLEG): container finished" podID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerID="9956451b7f6a7453dcf9ec2befab89ba58877b59b71068973c6c7cb858f5c492" exitCode=0 Jan 26 17:45:42 crc kubenswrapper[4823]: I0126 17:45:42.568313 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6czc" event={"ID":"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a","Type":"ContainerDied","Data":"9956451b7f6a7453dcf9ec2befab89ba58877b59b71068973c6c7cb858f5c492"} Jan 26 17:45:42 crc kubenswrapper[4823]: I0126 17:45:42.568455 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6czc" event={"ID":"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a","Type":"ContainerStarted","Data":"81e6fe30f078449c8138b5820f6eaf517212a40ac8f44e7db00712b1c2602548"} Jan 26 17:45:42 crc 
kubenswrapper[4823]: I0126 17:45:42.755269 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1cbd37a3-3241-4e20-9d2b-c73873212cb1/memcached/0.log" Jan 26 17:45:43 crc kubenswrapper[4823]: I0126 17:45:43.566946 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:45:43 crc kubenswrapper[4823]: E0126 17:45:43.567315 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:45:44 crc kubenswrapper[4823]: I0126 17:45:44.585599 4823 generic.go:334] "Generic (PLEG): container finished" podID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerID="bf0ff534f8dd2600cf24ad576f1fd4af797e86458c10e14e1571cbcfccfc72d2" exitCode=0 Jan 26 17:45:44 crc kubenswrapper[4823]: I0126 17:45:44.585671 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6czc" event={"ID":"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a","Type":"ContainerDied","Data":"bf0ff534f8dd2600cf24ad576f1fd4af797e86458c10e14e1571cbcfccfc72d2"} Jan 26 17:45:45 crc kubenswrapper[4823]: I0126 17:45:45.596818 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6czc" event={"ID":"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a","Type":"ContainerStarted","Data":"710755e011ed04712024bd4cd5c4440171eb02e0e06af5c25eb5bcafd6a6acd3"} Jan 26 17:45:45 crc kubenswrapper[4823]: I0126 17:45:45.613835 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t6czc" podStartSLOduration=3.166674276 
podStartE2EDuration="5.613810705s" podCreationTimestamp="2026-01-26 17:45:40 +0000 UTC" firstStartedPulling="2026-01-26 17:45:42.570960932 +0000 UTC m=+10739.256424037" lastFinishedPulling="2026-01-26 17:45:45.018097361 +0000 UTC m=+10741.703560466" observedRunningTime="2026-01-26 17:45:45.610761962 +0000 UTC m=+10742.296225067" watchObservedRunningTime="2026-01-26 17:45:45.613810705 +0000 UTC m=+10742.299273810" Jan 26 17:45:51 crc kubenswrapper[4823]: I0126 17:45:51.142421 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:51 crc kubenswrapper[4823]: I0126 17:45:51.143057 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:51 crc kubenswrapper[4823]: I0126 17:45:51.196047 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:51 crc kubenswrapper[4823]: I0126 17:45:51.695805 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:51 crc kubenswrapper[4823]: I0126 17:45:51.747820 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t6czc"] Jan 26 17:45:53 crc kubenswrapper[4823]: I0126 17:45:53.661799 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t6czc" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="registry-server" containerID="cri-o://710755e011ed04712024bd4cd5c4440171eb02e0e06af5c25eb5bcafd6a6acd3" gracePeriod=2 Jan 26 17:45:54 crc kubenswrapper[4823]: I0126 17:45:54.671531 4823 generic.go:334] "Generic (PLEG): container finished" podID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerID="710755e011ed04712024bd4cd5c4440171eb02e0e06af5c25eb5bcafd6a6acd3" exitCode=0 Jan 26 17:45:54 crc 
kubenswrapper[4823]: I0126 17:45:54.671593 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6czc" event={"ID":"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a","Type":"ContainerDied","Data":"710755e011ed04712024bd4cd5c4440171eb02e0e06af5c25eb5bcafd6a6acd3"} Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.264004 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.414550 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-utilities\") pod \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.414783 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7jbv\" (UniqueName: \"kubernetes.io/projected/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-kube-api-access-p7jbv\") pod \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.414881 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-catalog-content\") pod \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\" (UID: \"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a\") " Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.415453 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-utilities" (OuterVolumeSpecName: "utilities") pod "238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" (UID: "238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.415961 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.421520 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-kube-api-access-p7jbv" (OuterVolumeSpecName: "kube-api-access-p7jbv") pod "238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" (UID: "238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a"). InnerVolumeSpecName "kube-api-access-p7jbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.466675 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" (UID: "238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.517991 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7jbv\" (UniqueName: \"kubernetes.io/projected/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-kube-api-access-p7jbv\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.518032 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.681491 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6czc" event={"ID":"238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a","Type":"ContainerDied","Data":"81e6fe30f078449c8138b5820f6eaf517212a40ac8f44e7db00712b1c2602548"} Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.681593 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t6czc" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.681862 4823 scope.go:117] "RemoveContainer" containerID="710755e011ed04712024bd4cd5c4440171eb02e0e06af5c25eb5bcafd6a6acd3" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.703110 4823 scope.go:117] "RemoveContainer" containerID="bf0ff534f8dd2600cf24ad576f1fd4af797e86458c10e14e1571cbcfccfc72d2" Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.707217 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t6czc"] Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.716961 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t6czc"] Jan 26 17:45:55 crc kubenswrapper[4823]: I0126 17:45:55.723683 4823 scope.go:117] "RemoveContainer" containerID="9956451b7f6a7453dcf9ec2befab89ba58877b59b71068973c6c7cb858f5c492" Jan 26 17:45:56 crc kubenswrapper[4823]: I0126 17:45:56.560778 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:45:56 crc kubenswrapper[4823]: E0126 17:45:56.561088 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:45:57 crc kubenswrapper[4823]: I0126 17:45:57.573418 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" path="/var/lib/kubelet/pods/238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a/volumes" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.466879 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/util/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.467010 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/util/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.468649 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/pull/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.468740 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/pull/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.625347 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/pull/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.656714 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/extract/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.674933 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_10974b33d5a1f4d8d12d34b88e9bb0f4491bc9862add854564cc7d8c9f6z2x9_2298b17e-b08f-4710-8417-f795aa095251/util/0.log" Jan 26 17:46:05 crc kubenswrapper[4823]: I0126 17:46:05.889039 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-rgql2_bf60542f-f900-4d89-98f4-aeaa7878edda/manager/0.log" Jan 26 17:46:05 crc 
kubenswrapper[4823]: I0126 17:46:05.932249 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-szbhl_394d042b-9673-4187-8e4a-b479dc07be27/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.057204 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-f4qg7_d30f23ef-3901-419c-afd2-bce286e7bb01/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.197274 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-s7b2n_c133cb3a-ff1b-4819-90a2-91d0cecb0ed9/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.293194 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-l9rwn_038238a3-7348-4fd5-ae41-3473ff6cd14d/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.453101 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-p2vfc_7cd351ff-1cb2-417e-9d45-5f16d7dc0a43/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.702180 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-lrrrj_5b983d23-dbff-4482-b9fc-6fec60b1ab7f/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.838310 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-zcxds_2bc0a30b-01c7-4626-928b-fedcc58e373e/manager/0.log" Jan 26 17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.916447 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-tlwrn_f0f8a8c9-f69c-4eb4-b9cd-5abc6aca4c50/manager/0.log" Jan 26 
17:46:06 crc kubenswrapper[4823]: I0126 17:46:06.965342 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-xqx59_0d61828c-0d9d-42d5-8fbe-dea8080b620e/manager/0.log" Jan 26 17:46:07 crc kubenswrapper[4823]: I0126 17:46:07.129395 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-cnsl4_38cf7a4f-36ed-4af7-a896-27f163d35986/manager/0.log" Jan 26 17:46:07 crc kubenswrapper[4823]: I0126 17:46:07.247243 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-mrlhr_16294fad-09f5-4781-83d7-82b25d1bc644/manager/0.log" Jan 26 17:46:07 crc kubenswrapper[4823]: I0126 17:46:07.391104 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-9k7d5_ee032756-312e-4349-842b-f9bc642f7c08/manager/0.log" Jan 26 17:46:07 crc kubenswrapper[4823]: I0126 17:46:07.560753 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-2dzj6_586e8217-d8bb-4d02-bfae-39db746fb0ca/manager/0.log" Jan 26 17:46:07 crc kubenswrapper[4823]: I0126 17:46:07.714347 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854n5tcp_b30af672-528b-4f1d-8bbf-e96085248217/manager/0.log" Jan 26 17:46:07 crc kubenswrapper[4823]: I0126 17:46:07.963445 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-c4b5d4cc7-g5bhl_e13df68d-7c37-42a7-b54f-0d6d248012ad/operator/0.log" Jan 26 17:46:08 crc kubenswrapper[4823]: I0126 17:46:08.091749 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-index-qvd8b_b014fc7e-587f-402b-adb2-2be3c1911e15/registry-server/0.log" Jan 26 17:46:08 crc kubenswrapper[4823]: I0126 17:46:08.330006 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-snjmz_df95f821-a1f5-488a-a730-9c3c2f39fd4c/manager/0.log" Jan 26 17:46:08 crc kubenswrapper[4823]: I0126 17:46:08.507554 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-c56f9_2cdca653-4a4b-4452-9a00-5667349cb42a/manager/0.log" Jan 26 17:46:08 crc kubenswrapper[4823]: I0126 17:46:08.678783 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-2247x_13bd131b-e367-44a0-a552-bf7f2446f6c2/operator/0.log" Jan 26 17:46:08 crc kubenswrapper[4823]: I0126 17:46:08.872731 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-5qs2p_f6145a22-466d-42fa-995e-7e6a8c4ffcc2/manager/0.log" Jan 26 17:46:09 crc kubenswrapper[4823]: I0126 17:46:09.110307 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-85cd9769bb-h4ckq_7534725a-0a1c-4ef0-b5ce-e6b758b4a174/manager/0.log" Jan 26 17:46:09 crc kubenswrapper[4823]: I0126 17:46:09.151738 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-948cd64bd-tpsth_d89101c6-6415-47d7-8e82-65d8a7b3a961/manager/0.log" Jan 26 17:46:09 crc kubenswrapper[4823]: I0126 17:46:09.320953 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-7fc556f645-qgpp5_0e7ff918-aecf-4718-912b-d85f1dbd1799/manager/0.log" Jan 26 17:46:09 crc kubenswrapper[4823]: I0126 17:46:09.359011 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-lltvv_78a7e26b-4eac-4604-82ff-ce393cf816b6/manager/0.log" Jan 26 17:46:11 crc kubenswrapper[4823]: I0126 17:46:11.561273 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:46:11 crc kubenswrapper[4823]: E0126 17:46:11.561548 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:46:26 crc kubenswrapper[4823]: I0126 17:46:26.560645 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:46:26 crc kubenswrapper[4823]: E0126 17:46:26.561611 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:46:29 crc kubenswrapper[4823]: I0126 17:46:29.273917 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-fhzvg_010c3f80-32bc-4a56-b1e9-7503e757192f/control-plane-machine-set-operator/0.log" Jan 26 17:46:29 crc kubenswrapper[4823]: I0126 17:46:29.493467 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4m2g6_204a2df3-b8d7-4998-8ff1-3c3a6112c666/kube-rbac-proxy/0.log" Jan 26 17:46:29 crc kubenswrapper[4823]: I0126 17:46:29.550532 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4m2g6_204a2df3-b8d7-4998-8ff1-3c3a6112c666/machine-api-operator/0.log" Jan 26 17:46:41 crc kubenswrapper[4823]: I0126 17:46:41.359553 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-75hjq_2a5fd3e7-f2f4-484f-9d4b-24e596ed7502/cert-manager-controller/0.log" Jan 26 17:46:41 crc kubenswrapper[4823]: I0126 17:46:41.519211 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-85nz2_ba4d7c7f-36e3-4beb-b4b5-ad3e1cf5542b/cert-manager-cainjector/0.log" Jan 26 17:46:41 crc kubenswrapper[4823]: I0126 17:46:41.563104 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:46:41 crc kubenswrapper[4823]: E0126 17:46:41.563358 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:46:41 crc kubenswrapper[4823]: I0126 17:46:41.607927 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-rdkmv_f48d6b9e-8425-4718-a920-2d2ca2bc5104/cert-manager-webhook/0.log" Jan 26 17:46:52 crc kubenswrapper[4823]: I0126 17:46:52.560486 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:46:52 crc 
kubenswrapper[4823]: E0126 17:46:52.561298 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:46:53 crc kubenswrapper[4823]: I0126 17:46:53.339978 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-l656m_7451d383-fb90-4543-b142-792890477728/nmstate-console-plugin/0.log" Jan 26 17:46:53 crc kubenswrapper[4823]: I0126 17:46:53.536044 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-2qjs7_05c32998-fc69-48d2-b15a-98654d444a3f/nmstate-handler/0.log" Jan 26 17:46:53 crc kubenswrapper[4823]: I0126 17:46:53.550107 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b9p4g_139e1af7-704a-48d2-86ca-6b05e2307f72/kube-rbac-proxy/0.log" Jan 26 17:46:53 crc kubenswrapper[4823]: I0126 17:46:53.589853 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b9p4g_139e1af7-704a-48d2-86ca-6b05e2307f72/nmstate-metrics/0.log" Jan 26 17:46:53 crc kubenswrapper[4823]: I0126 17:46:53.739628 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-tbvxm_2938d719-ff17-4def-83c8-3c6b49cd6627/nmstate-operator/0.log" Jan 26 17:46:53 crc kubenswrapper[4823]: I0126 17:46:53.825314 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-nrpvd_ac7cc86e-81a5-4a00-95dd-183f1b1ee5dc/nmstate-webhook/0.log" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.907668 4823 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xm5h4"] Jan 26 17:47:05 crc kubenswrapper[4823]: E0126 17:47:05.909079 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="extract-utilities" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.909099 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="extract-utilities" Jan 26 17:47:05 crc kubenswrapper[4823]: E0126 17:47:05.909139 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="registry-server" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.909148 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="registry-server" Jan 26 17:47:05 crc kubenswrapper[4823]: E0126 17:47:05.909181 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="extract-content" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.909189 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="extract-content" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.909450 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="238fdbed-cfdf-4ab1-96b8-97cd4ae6df9a" containerName="registry-server" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.911127 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:05 crc kubenswrapper[4823]: I0126 17:47:05.921903 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm5h4"] Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.042043 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-catalog-content\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.042151 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-utilities\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.042255 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnfn2\" (UniqueName: \"kubernetes.io/projected/6fa63f99-1bcb-48e8-bdb5-136ed615813b-kube-api-access-rnfn2\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.144152 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-catalog-content\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.144763 4823 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-catalog-content\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.145244 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-utilities\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.145284 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-utilities\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.145484 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnfn2\" (UniqueName: \"kubernetes.io/projected/6fa63f99-1bcb-48e8-bdb5-136ed615813b-kube-api-access-rnfn2\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.170776 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnfn2\" (UniqueName: \"kubernetes.io/projected/6fa63f99-1bcb-48e8-bdb5-136ed615813b-kube-api-access-rnfn2\") pod \"certified-operators-xm5h4\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.241958 4823 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.561629 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:47:06 crc kubenswrapper[4823]: E0126 17:47:06.562154 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:47:06 crc kubenswrapper[4823]: I0126 17:47:06.802711 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm5h4"] Jan 26 17:47:07 crc kubenswrapper[4823]: I0126 17:47:07.253830 4823 generic.go:334] "Generic (PLEG): container finished" podID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerID="44b8a86c947ec956878af1be970de5f5adb3b0fb60c8febe756d85345566bd52" exitCode=0 Jan 26 17:47:07 crc kubenswrapper[4823]: I0126 17:47:07.253882 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm5h4" event={"ID":"6fa63f99-1bcb-48e8-bdb5-136ed615813b","Type":"ContainerDied","Data":"44b8a86c947ec956878af1be970de5f5adb3b0fb60c8febe756d85345566bd52"} Jan 26 17:47:07 crc kubenswrapper[4823]: I0126 17:47:07.253909 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm5h4" event={"ID":"6fa63f99-1bcb-48e8-bdb5-136ed615813b","Type":"ContainerStarted","Data":"39e23a4457355f18fe749123e58322cc7a144b9774a8695dc70d3d55fd31cd93"} Jan 26 17:47:09 crc kubenswrapper[4823]: I0126 17:47:09.270925 4823 generic.go:334] "Generic (PLEG): container finished" podID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" 
containerID="eebed6a9991360c785ce095282b3b848a2ea4068ab98640d735535b4e6a61143" exitCode=0 Jan 26 17:47:09 crc kubenswrapper[4823]: I0126 17:47:09.270977 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm5h4" event={"ID":"6fa63f99-1bcb-48e8-bdb5-136ed615813b","Type":"ContainerDied","Data":"eebed6a9991360c785ce095282b3b848a2ea4068ab98640d735535b4e6a61143"} Jan 26 17:47:10 crc kubenswrapper[4823]: I0126 17:47:10.281381 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm5h4" event={"ID":"6fa63f99-1bcb-48e8-bdb5-136ed615813b","Type":"ContainerStarted","Data":"984db5d650979f6112f0c2b0e524e8dc5b2a8f7979c8eb2353384b74289b206c"} Jan 26 17:47:11 crc kubenswrapper[4823]: I0126 17:47:11.312866 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xm5h4" podStartSLOduration=3.7326328650000002 podStartE2EDuration="6.312847965s" podCreationTimestamp="2026-01-26 17:47:05 +0000 UTC" firstStartedPulling="2026-01-26 17:47:07.256290004 +0000 UTC m=+10823.941753109" lastFinishedPulling="2026-01-26 17:47:09.836505094 +0000 UTC m=+10826.521968209" observedRunningTime="2026-01-26 17:47:11.308109536 +0000 UTC m=+10827.993572641" watchObservedRunningTime="2026-01-26 17:47:11.312847965 +0000 UTC m=+10827.998311070" Jan 26 17:47:16 crc kubenswrapper[4823]: I0126 17:47:16.242592 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:16 crc kubenswrapper[4823]: I0126 17:47:16.243284 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:16 crc kubenswrapper[4823]: I0126 17:47:16.303892 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:16 crc kubenswrapper[4823]: I0126 
17:47:16.384096 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:16 crc kubenswrapper[4823]: I0126 17:47:16.541037 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm5h4"] Jan 26 17:47:18 crc kubenswrapper[4823]: I0126 17:47:18.353668 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xm5h4" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="registry-server" containerID="cri-o://984db5d650979f6112f0c2b0e524e8dc5b2a8f7979c8eb2353384b74289b206c" gracePeriod=2 Jan 26 17:47:19 crc kubenswrapper[4823]: I0126 17:47:19.366858 4823 generic.go:334] "Generic (PLEG): container finished" podID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerID="984db5d650979f6112f0c2b0e524e8dc5b2a8f7979c8eb2353384b74289b206c" exitCode=0 Jan 26 17:47:19 crc kubenswrapper[4823]: I0126 17:47:19.366932 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm5h4" event={"ID":"6fa63f99-1bcb-48e8-bdb5-136ed615813b","Type":"ContainerDied","Data":"984db5d650979f6112f0c2b0e524e8dc5b2a8f7979c8eb2353384b74289b206c"} Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.020338 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.089504 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-894zb_77e0c22b-572f-4c51-bb37-158f84671365/kube-rbac-proxy/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.137155 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-utilities\") pod \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.137447 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-catalog-content\") pod \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.137525 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnfn2\" (UniqueName: \"kubernetes.io/projected/6fa63f99-1bcb-48e8-bdb5-136ed615813b-kube-api-access-rnfn2\") pod \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\" (UID: \"6fa63f99-1bcb-48e8-bdb5-136ed615813b\") " Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.138172 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-utilities" (OuterVolumeSpecName: "utilities") pod "6fa63f99-1bcb-48e8-bdb5-136ed615813b" (UID: "6fa63f99-1bcb-48e8-bdb5-136ed615813b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.144553 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa63f99-1bcb-48e8-bdb5-136ed615813b-kube-api-access-rnfn2" (OuterVolumeSpecName: "kube-api-access-rnfn2") pod "6fa63f99-1bcb-48e8-bdb5-136ed615813b" (UID: "6fa63f99-1bcb-48e8-bdb5-136ed615813b"). InnerVolumeSpecName "kube-api-access-rnfn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.195865 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6fa63f99-1bcb-48e8-bdb5-136ed615813b" (UID: "6fa63f99-1bcb-48e8-bdb5-136ed615813b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.239488 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.239518 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnfn2\" (UniqueName: \"kubernetes.io/projected/6fa63f99-1bcb-48e8-bdb5-136ed615813b-kube-api-access-rnfn2\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.239527 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6fa63f99-1bcb-48e8-bdb5-136ed615813b-utilities\") on node \"crc\" DevicePath \"\"" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.339309 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-894zb_77e0c22b-572f-4c51-bb37-158f84671365/controller/0.log" 
Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.381793 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm5h4" event={"ID":"6fa63f99-1bcb-48e8-bdb5-136ed615813b","Type":"ContainerDied","Data":"39e23a4457355f18fe749123e58322cc7a144b9774a8695dc70d3d55fd31cd93"} Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.381842 4823 scope.go:117] "RemoveContainer" containerID="984db5d650979f6112f0c2b0e524e8dc5b2a8f7979c8eb2353384b74289b206c" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.381981 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm5h4" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.407036 4823 scope.go:117] "RemoveContainer" containerID="eebed6a9991360c785ce095282b3b848a2ea4068ab98640d735535b4e6a61143" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.426048 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm5h4"] Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.436851 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xm5h4"] Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.442807 4823 scope.go:117] "RemoveContainer" containerID="44b8a86c947ec956878af1be970de5f5adb3b0fb60c8febe756d85345566bd52" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.471341 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-frr-files/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.560418 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:47:20 crc kubenswrapper[4823]: E0126 17:47:20.560667 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.622859 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-reloader/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.637781 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-metrics/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.651840 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-frr-files/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.681560 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-reloader/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.876318 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-reloader/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.888346 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-frr-files/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.911311 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-metrics/0.log" Jan 26 17:47:20 crc kubenswrapper[4823]: I0126 17:47:20.931203 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-metrics/0.log" 
Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.062879 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-frr-files/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.062863 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-reloader/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.075919 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/cp-metrics/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.164310 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/controller/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.353605 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/kube-rbac-proxy/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.363388 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/frr-metrics/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.392211 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/kube-rbac-proxy-frr/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.574260 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" path="/var/lib/kubelet/pods/6fa63f99-1bcb-48e8-bdb5-136ed615813b/volumes" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.615144 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-bjdsr_90ae223a-8f0d-43c4-afb1-b6de69aebef6/frr-k8s-webhook-server/0.log" Jan 
26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.648944 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/reloader/0.log" Jan 26 17:47:21 crc kubenswrapper[4823]: I0126 17:47:21.810712 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-754dfc8bcc-pmxg7_591314cb-abf9-43bb-88a6-7ea227f99818/manager/0.log" Jan 26 17:47:22 crc kubenswrapper[4823]: I0126 17:47:22.045585 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5f6c667744-wvxxk_6b5a60ad-91b4-4a1d-8f5d-5208b533d8ec/webhook-server/0.log" Jan 26 17:47:22 crc kubenswrapper[4823]: I0126 17:47:22.114423 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lglsz_e883d469-e238-4d04-958a-6b4d2b0ae8be/kube-rbac-proxy/0.log" Jan 26 17:47:22 crc kubenswrapper[4823]: I0126 17:47:22.830074 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lglsz_e883d469-e238-4d04-958a-6b4d2b0ae8be/speaker/0.log" Jan 26 17:47:23 crc kubenswrapper[4823]: I0126 17:47:23.782966 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-spxlq_ba560662-8eb2-4812-86a4-bf963eb97bf0/frr/0.log" Jan 26 17:47:33 crc kubenswrapper[4823]: I0126 17:47:33.567681 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:47:33 crc kubenswrapper[4823]: E0126 17:47:33.568566 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" 
podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:47:34 crc kubenswrapper[4823]: I0126 17:47:34.636678 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/util/0.log" Jan 26 17:47:34 crc kubenswrapper[4823]: I0126 17:47:34.810200 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/util/0.log" Jan 26 17:47:34 crc kubenswrapper[4823]: I0126 17:47:34.865752 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/pull/0.log" Jan 26 17:47:34 crc kubenswrapper[4823]: I0126 17:47:34.901345 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/pull/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.085229 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/util/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.101966 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/extract/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.110929 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcfstws_17e6305e-431a-4ea1-8180-84f01a16d2c2/pull/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.256827 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/util/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.477724 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/pull/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.485523 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/util/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.498656 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/pull/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.651099 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/util/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.677457 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/pull/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.712307 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713fnccm_4e623b4c-fb12-4aa5-a519-0ec22f564425/extract/0.log" Jan 26 17:47:35 crc kubenswrapper[4823]: I0126 17:47:35.832161 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/extract-utilities/0.log" Jan 26 17:47:35 crc 
kubenswrapper[4823]: I0126 17:47:35.970297 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/extract-utilities/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.024886 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/extract-content/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.025253 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/extract-content/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.178693 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/extract-utilities/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.234450 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/extract-content/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.402732 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/extract-utilities/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.573544 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qngvh_34655fcf-06c6-4e25-89ff-44ae9974fb63/registry-server/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.599248 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/extract-content/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.676769 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/extract-utilities/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.705893 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/extract-content/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.794294 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/extract-content/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.808770 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/extract-utilities/0.log" Jan 26 17:47:36 crc kubenswrapper[4823]: I0126 17:47:36.954506 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-bjnqv_46f1bb1c-25a5-495b-b871-28c248efe429/marketplace-operator/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.138202 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/extract-utilities/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.392530 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/extract-content/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.401436 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/extract-utilities/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.427677 4823 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/extract-content/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.599609 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/extract-utilities/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.679222 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/extract-content/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.891966 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-v8676_5205f998-5201-4e3b-bb0a-eb744eb50637/registry-server/0.log" Jan 26 17:47:37 crc kubenswrapper[4823]: I0126 17:47:37.906092 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/extract-utilities/0.log" Jan 26 17:47:38 crc kubenswrapper[4823]: I0126 17:47:38.087307 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/extract-utilities/0.log" Jan 26 17:47:38 crc kubenswrapper[4823]: I0126 17:47:38.127316 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/extract-content/0.log" Jan 26 17:47:38 crc kubenswrapper[4823]: I0126 17:47:38.182761 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/extract-content/0.log" Jan 26 17:47:38 crc kubenswrapper[4823]: I0126 17:47:38.397753 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/extract-utilities/0.log" 
Jan 26 17:47:38 crc kubenswrapper[4823]: I0126 17:47:38.398058 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/extract-content/0.log" Jan 26 17:47:38 crc kubenswrapper[4823]: I0126 17:47:38.972200 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-t2xmr_3c7e61f1-d18b-48f6-a644-bf611d667468/registry-server/0.log" Jan 26 17:47:40 crc kubenswrapper[4823]: I0126 17:47:40.083223 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-lg88q_893e9991-2af7-4fd4-842d-70aa260ff39a/registry-server/0.log" Jan 26 17:47:47 crc kubenswrapper[4823]: I0126 17:47:47.560969 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:47:47 crc kubenswrapper[4823]: E0126 17:47:47.561676 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:48:01 crc kubenswrapper[4823]: E0126 17:48:01.040116 4823 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.106:45204->38.102.83.106:32927: write tcp 38.102.83.106:45204->38.102.83.106:32927: write: broken pipe Jan 26 17:48:02 crc kubenswrapper[4823]: I0126 17:48:02.561499 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:48:02 crc kubenswrapper[4823]: E0126 17:48:02.561965 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:48:16 crc kubenswrapper[4823]: I0126 17:48:16.561096 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:48:16 crc kubenswrapper[4823]: E0126 17:48:16.561873 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:48:31 crc kubenswrapper[4823]: I0126 17:48:31.560742 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:48:31 crc kubenswrapper[4823]: E0126 17:48:31.561591 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:48:42 crc kubenswrapper[4823]: I0126 17:48:42.562109 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:48:42 crc kubenswrapper[4823]: E0126 17:48:42.563016 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:48:56 crc kubenswrapper[4823]: I0126 17:48:56.560009 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:48:56 crc kubenswrapper[4823]: E0126 17:48:56.560830 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:49:11 crc kubenswrapper[4823]: I0126 17:49:11.561076 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:49:11 crc kubenswrapper[4823]: E0126 17:49:11.562037 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:49:24 crc kubenswrapper[4823]: I0126 17:49:24.560918 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:49:24 crc kubenswrapper[4823]: E0126 17:49:24.561754 4823 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kv6z2_openshift-machine-config-operator(1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d)\"" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" Jan 26 17:49:37 crc kubenswrapper[4823]: I0126 17:49:37.560523 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6" Jan 26 17:49:38 crc kubenswrapper[4823]: I0126 17:49:38.628352 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"5751c026f03cf361b8cdad7ec44fe16b83b03ad8d78e6ce46e74928f4b8342a8"} Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.750216 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8jwhl"] Jan 26 17:49:42 crc kubenswrapper[4823]: E0126 17:49:42.750940 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="extract-content" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.750956 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="extract-content" Jan 26 17:49:42 crc kubenswrapper[4823]: E0126 17:49:42.750981 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="extract-utilities" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.750987 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="extract-utilities" Jan 26 17:49:42 crc kubenswrapper[4823]: E0126 17:49:42.751012 4823 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="registry-server" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.751020 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="registry-server" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.751231 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fa63f99-1bcb-48e8-bdb5-136ed615813b" containerName="registry-server" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.752969 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.764117 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jwhl"] Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.806741 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9tfk\" (UniqueName: \"kubernetes.io/projected/7396f2e4-d114-4595-9b11-784b659b309e-kube-api-access-l9tfk\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.806953 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-catalog-content\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.807033 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-utilities\") pod \"redhat-operators-8jwhl\" (UID: 
\"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.908719 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9tfk\" (UniqueName: \"kubernetes.io/projected/7396f2e4-d114-4595-9b11-784b659b309e-kube-api-access-l9tfk\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.909233 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-catalog-content\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.909297 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-utilities\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.910168 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-catalog-content\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.910167 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-utilities\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " 
pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:42 crc kubenswrapper[4823]: I0126 17:49:42.928611 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9tfk\" (UniqueName: \"kubernetes.io/projected/7396f2e4-d114-4595-9b11-784b659b309e-kube-api-access-l9tfk\") pod \"redhat-operators-8jwhl\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") " pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:43 crc kubenswrapper[4823]: I0126 17:49:43.095818 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwhl" Jan 26 17:49:43 crc kubenswrapper[4823]: I0126 17:49:43.631238 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8jwhl"] Jan 26 17:49:43 crc kubenswrapper[4823]: I0126 17:49:43.666523 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerStarted","Data":"74946c2c34b761f57dac23246d11634e9f6e5b94a3fa70f82ab8c9588af7ed2f"} Jan 26 17:49:44 crc kubenswrapper[4823]: I0126 17:49:44.678866 4823 generic.go:334] "Generic (PLEG): container finished" podID="7396f2e4-d114-4595-9b11-784b659b309e" containerID="462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782" exitCode=0 Jan 26 17:49:44 crc kubenswrapper[4823]: I0126 17:49:44.678991 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerDied","Data":"462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782"} Jan 26 17:49:46 crc kubenswrapper[4823]: I0126 17:49:46.706509 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" 
event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerStarted","Data":"af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6"} Jan 26 17:49:53 crc kubenswrapper[4823]: I0126 17:49:53.777063 4823 generic.go:334] "Generic (PLEG): container finished" podID="7396f2e4-d114-4595-9b11-784b659b309e" containerID="af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6" exitCode=0 Jan 26 17:49:53 crc kubenswrapper[4823]: I0126 17:49:53.777682 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerDied","Data":"af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6"} Jan 26 17:49:53 crc kubenswrapper[4823]: I0126 17:49:53.783472 4823 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 26 17:49:54 crc kubenswrapper[4823]: I0126 17:49:54.790832 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerStarted","Data":"4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28"} Jan 26 17:49:54 crc kubenswrapper[4823]: I0126 17:49:54.821827 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8jwhl" podStartSLOduration=3.198598661 podStartE2EDuration="12.821802565s" podCreationTimestamp="2026-01-26 17:49:42 +0000 UTC" firstStartedPulling="2026-01-26 17:49:44.681187744 +0000 UTC m=+10981.366650849" lastFinishedPulling="2026-01-26 17:49:54.304391648 +0000 UTC m=+10990.989854753" observedRunningTime="2026-01-26 17:49:54.813493278 +0000 UTC m=+10991.498956413" watchObservedRunningTime="2026-01-26 17:49:54.821802565 +0000 UTC m=+10991.507265680" Jan 26 17:50:03 crc kubenswrapper[4823]: I0126 17:50:03.096203 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-8jwhl"
Jan 26 17:50:03 crc kubenswrapper[4823]: I0126 17:50:03.098018 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8jwhl"
Jan 26 17:50:03 crc kubenswrapper[4823]: I0126 17:50:03.150344 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8jwhl"
Jan 26 17:50:03 crc kubenswrapper[4823]: I0126 17:50:03.910572 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8jwhl"
Jan 26 17:50:03 crc kubenswrapper[4823]: I0126 17:50:03.961492 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jwhl"]
Jan 26 17:50:05 crc kubenswrapper[4823]: I0126 17:50:05.881778 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8jwhl" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="registry-server" containerID="cri-o://4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28" gracePeriod=2
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.498538 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwhl"
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.626108 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-utilities\") pod \"7396f2e4-d114-4595-9b11-784b659b309e\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") "
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.626192 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-catalog-content\") pod \"7396f2e4-d114-4595-9b11-784b659b309e\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") "
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.626422 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9tfk\" (UniqueName: \"kubernetes.io/projected/7396f2e4-d114-4595-9b11-784b659b309e-kube-api-access-l9tfk\") pod \"7396f2e4-d114-4595-9b11-784b659b309e\" (UID: \"7396f2e4-d114-4595-9b11-784b659b309e\") "
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.627107 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-utilities" (OuterVolumeSpecName: "utilities") pod "7396f2e4-d114-4595-9b11-784b659b309e" (UID: "7396f2e4-d114-4595-9b11-784b659b309e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.628317 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.632031 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7396f2e4-d114-4595-9b11-784b659b309e-kube-api-access-l9tfk" (OuterVolumeSpecName: "kube-api-access-l9tfk") pod "7396f2e4-d114-4595-9b11-784b659b309e" (UID: "7396f2e4-d114-4595-9b11-784b659b309e"). InnerVolumeSpecName "kube-api-access-l9tfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.731015 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9tfk\" (UniqueName: \"kubernetes.io/projected/7396f2e4-d114-4595-9b11-784b659b309e-kube-api-access-l9tfk\") on node \"crc\" DevicePath \"\""
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.749987 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7396f2e4-d114-4595-9b11-784b659b309e" (UID: "7396f2e4-d114-4595-9b11-784b659b309e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.833145 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7396f2e4-d114-4595-9b11-784b659b309e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.891094 4823 generic.go:334] "Generic (PLEG): container finished" podID="7396f2e4-d114-4595-9b11-784b659b309e" containerID="4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28" exitCode=0
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.891145 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerDied","Data":"4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28"}
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.891177 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8jwhl" event={"ID":"7396f2e4-d114-4595-9b11-784b659b309e","Type":"ContainerDied","Data":"74946c2c34b761f57dac23246d11634e9f6e5b94a3fa70f82ab8c9588af7ed2f"}
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.891197 4823 scope.go:117] "RemoveContainer" containerID="4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28"
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.891352 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8jwhl"
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.924111 4823 scope.go:117] "RemoveContainer" containerID="af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6"
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.936182 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8jwhl"]
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.944628 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8jwhl"]
Jan 26 17:50:06 crc kubenswrapper[4823]: I0126 17:50:06.961985 4823 scope.go:117] "RemoveContainer" containerID="462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.011380 4823 scope.go:117] "RemoveContainer" containerID="4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28"
Jan 26 17:50:07 crc kubenswrapper[4823]: E0126 17:50:07.011920 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28\": container with ID starting with 4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28 not found: ID does not exist" containerID="4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.011988 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28"} err="failed to get container status \"4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28\": rpc error: code = NotFound desc = could not find container \"4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28\": container with ID starting with 4e5d449a599acece61be53bfc20755d8ef2c54e8d67f8496c886e687805f8a28 not found: ID does not exist"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.012037 4823 scope.go:117] "RemoveContainer" containerID="af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6"
Jan 26 17:50:07 crc kubenswrapper[4823]: E0126 17:50:07.012608 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6\": container with ID starting with af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6 not found: ID does not exist" containerID="af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.012653 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6"} err="failed to get container status \"af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6\": rpc error: code = NotFound desc = could not find container \"af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6\": container with ID starting with af067cd2b0ebdd27a4543f466d1e2ea8d72342dbcececba0c90b30b3191e63b6 not found: ID does not exist"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.012683 4823 scope.go:117] "RemoveContainer" containerID="462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782"
Jan 26 17:50:07 crc kubenswrapper[4823]: E0126 17:50:07.013217 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782\": container with ID starting with 462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782 not found: ID does not exist" containerID="462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.013254 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782"} err="failed to get container status \"462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782\": rpc error: code = NotFound desc = could not find container \"462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782\": container with ID starting with 462a3bc3b1dda5147ba1a7cfb2168bf1f42839f1b42c6098cec3588b1bbd3782 not found: ID does not exist"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.571207 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7396f2e4-d114-4595-9b11-784b659b309e" path="/var/lib/kubelet/pods/7396f2e4-d114-4595-9b11-784b659b309e/volumes"
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.902069 4823 generic.go:334] "Generic (PLEG): container finished" podID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerID="47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4" exitCode=0
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.902251 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" event={"ID":"8542386b-e3e4-47b9-ad0f-aea78951dd82","Type":"ContainerDied","Data":"47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"}
Jan 26 17:50:07 crc kubenswrapper[4823]: I0126 17:50:07.903168 4823 scope.go:117] "RemoveContainer" containerID="47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"
Jan 26 17:50:08 crc kubenswrapper[4823]: I0126 17:50:08.239438 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ckm7w_must-gather-jqwb8_8542386b-e3e4-47b9-ad0f-aea78951dd82/gather/0.log"
Jan 26 17:50:11 crc kubenswrapper[4823]: I0126 17:50:11.048241 4823 scope.go:117] "RemoveContainer" containerID="3581e99b11133badcf835eb6dc37cfe77d0845ef9b729cc198e1aa95e3593170"
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.015047 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ckm7w/must-gather-jqwb8"]
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.015770 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ckm7w/must-gather-jqwb8" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="copy" containerID="cri-o://948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7" gracePeriod=2
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.035737 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ckm7w/must-gather-jqwb8"]
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.681739 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ckm7w_must-gather-jqwb8_8542386b-e3e4-47b9-ad0f-aea78951dd82/copy/0.log"
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.682683 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/must-gather-jqwb8"
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.782769 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8542386b-e3e4-47b9-ad0f-aea78951dd82-must-gather-output\") pod \"8542386b-e3e4-47b9-ad0f-aea78951dd82\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") "
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.783017 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnhf7\" (UniqueName: \"kubernetes.io/projected/8542386b-e3e4-47b9-ad0f-aea78951dd82-kube-api-access-hnhf7\") pod \"8542386b-e3e4-47b9-ad0f-aea78951dd82\" (UID: \"8542386b-e3e4-47b9-ad0f-aea78951dd82\") "
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.790858 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8542386b-e3e4-47b9-ad0f-aea78951dd82-kube-api-access-hnhf7" (OuterVolumeSpecName: "kube-api-access-hnhf7") pod "8542386b-e3e4-47b9-ad0f-aea78951dd82" (UID: "8542386b-e3e4-47b9-ad0f-aea78951dd82"). InnerVolumeSpecName "kube-api-access-hnhf7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.885800 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnhf7\" (UniqueName: \"kubernetes.io/projected/8542386b-e3e4-47b9-ad0f-aea78951dd82-kube-api-access-hnhf7\") on node \"crc\" DevicePath \"\""
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.994902 4823 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ckm7w_must-gather-jqwb8_8542386b-e3e4-47b9-ad0f-aea78951dd82/copy/0.log"
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.995058 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8542386b-e3e4-47b9-ad0f-aea78951dd82-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8542386b-e3e4-47b9-ad0f-aea78951dd82" (UID: "8542386b-e3e4-47b9-ad0f-aea78951dd82"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.995694 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ckm7w/must-gather-jqwb8"
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.995795 4823 scope.go:117] "RemoveContainer" containerID="948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7"
Jan 26 17:50:17 crc kubenswrapper[4823]: I0126 17:50:17.995600 4823 generic.go:334] "Generic (PLEG): container finished" podID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerID="948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7" exitCode=143
Jan 26 17:50:18 crc kubenswrapper[4823]: I0126 17:50:18.020044 4823 scope.go:117] "RemoveContainer" containerID="47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"
Jan 26 17:50:18 crc kubenswrapper[4823]: I0126 17:50:18.090003 4823 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8542386b-e3e4-47b9-ad0f-aea78951dd82-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 26 17:50:18 crc kubenswrapper[4823]: I0126 17:50:18.097334 4823 scope.go:117] "RemoveContainer" containerID="948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7"
Jan 26 17:50:18 crc kubenswrapper[4823]: E0126 17:50:18.098129 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7\": container with ID starting with 948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7 not found: ID does not exist" containerID="948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7"
Jan 26 17:50:18 crc kubenswrapper[4823]: I0126 17:50:18.098170 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7"} err="failed to get container status \"948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7\": rpc error: code = NotFound desc = could not find container \"948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7\": container with ID starting with 948c5cdd0015dcd0d469e7c7ef3668f57b62627909bd9c38bf7492eeb13177a7 not found: ID does not exist"
Jan 26 17:50:18 crc kubenswrapper[4823]: I0126 17:50:18.098197 4823 scope.go:117] "RemoveContainer" containerID="47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"
Jan 26 17:50:18 crc kubenswrapper[4823]: E0126 17:50:18.098518 4823 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4\": container with ID starting with 47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4 not found: ID does not exist" containerID="47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"
Jan 26 17:50:18 crc kubenswrapper[4823]: I0126 17:50:18.098553 4823 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4"} err="failed to get container status \"47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4\": rpc error: code = NotFound desc = could not find container \"47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4\": container with ID starting with 47ef3ce380e2ada2adb27c4d7a805beaac220a8865f652e5db40e7e1f7ab97c4 not found: ID does not exist"
Jan 26 17:50:19 crc kubenswrapper[4823]: I0126 17:50:19.571857 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" path="/var/lib/kubelet/pods/8542386b-e3e4-47b9-ad0f-aea78951dd82/volumes"
Jan 26 17:50:49 crc kubenswrapper[4823]: E0126 17:50:49.567482 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/systemd-hostnamed.service\": RecentStats: unable to find data in memory cache]"
Jan 26 17:51:11 crc kubenswrapper[4823]: I0126 17:51:11.124807 4823 scope.go:117] "RemoveContainer" containerID="18635684e1200681cf992b24e2702ad43a58ec4ba92a521eae2f133accb270fd"
Jan 26 17:51:11 crc kubenswrapper[4823]: I0126 17:51:11.155305 4823 scope.go:117] "RemoveContainer" containerID="811b749b18e83a92a6f668a4ad7f3b51448e251c487c3185116f3df9a6a74f4f"
Jan 26 17:51:11 crc kubenswrapper[4823]: I0126 17:51:11.196530 4823 scope.go:117] "RemoveContainer" containerID="b3a02ed33de1aeadfe0129346b0b44d745ef53011fbabaf727544cc66af294e9"
Jan 26 17:51:11 crc kubenswrapper[4823]: I0126 17:51:11.216468 4823 scope.go:117] "RemoveContainer" containerID="1c8fcc0ff6a6cebbe159dcc2ff80b60519cb22cbd5d562ec49a7e00381af10ce"
Jan 26 17:52:04 crc kubenswrapper[4823]: I0126 17:52:04.508158 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:52:04 crc kubenswrapper[4823]: I0126 17:52:04.508679 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:52:34 crc kubenswrapper[4823]: I0126 17:52:34.508851 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:52:34 crc kubenswrapper[4823]: I0126 17:52:34.509488 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:53:04 crc kubenswrapper[4823]: I0126 17:53:04.508624 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:53:04 crc kubenswrapper[4823]: I0126 17:53:04.509207 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:53:04 crc kubenswrapper[4823]: I0126 17:53:04.509260 4823 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2"
Jan 26 17:53:04 crc kubenswrapper[4823]: I0126 17:53:04.510076 4823 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5751c026f03cf361b8cdad7ec44fe16b83b03ad8d78e6ce46e74928f4b8342a8"} pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 26 17:53:04 crc kubenswrapper[4823]: I0126 17:53:04.510137 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" containerID="cri-o://5751c026f03cf361b8cdad7ec44fe16b83b03ad8d78e6ce46e74928f4b8342a8" gracePeriod=600
Jan 26 17:53:05 crc kubenswrapper[4823]: I0126 17:53:05.529624 4823 generic.go:334] "Generic (PLEG): container finished" podID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerID="5751c026f03cf361b8cdad7ec44fe16b83b03ad8d78e6ce46e74928f4b8342a8" exitCode=0
Jan 26 17:53:05 crc kubenswrapper[4823]: I0126 17:53:05.529888 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerDied","Data":"5751c026f03cf361b8cdad7ec44fe16b83b03ad8d78e6ce46e74928f4b8342a8"}
Jan 26 17:53:05 crc kubenswrapper[4823]: I0126 17:53:05.530229 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" event={"ID":"1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d","Type":"ContainerStarted","Data":"97bbaf1f814e6334da674c808478a75943d7e7ae87b0499c8634e127bd21e15f"}
Jan 26 17:53:05 crc kubenswrapper[4823]: I0126 17:53:05.530251 4823 scope.go:117] "RemoveContainer" containerID="9cb426491da85fe2a56ec337e6b7e435d502d004b15502cb97cb545829db95b6"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.072102 4823 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qtjfl"]
Jan 26 17:54:51 crc kubenswrapper[4823]: E0126 17:54:51.074352 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="gather"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.074489 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="gather"
Jan 26 17:54:51 crc kubenswrapper[4823]: E0126 17:54:51.074577 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="registry-server"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.074647 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="registry-server"
Jan 26 17:54:51 crc kubenswrapper[4823]: E0126 17:54:51.074738 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="extract-content"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.074807 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="extract-content"
Jan 26 17:54:51 crc kubenswrapper[4823]: E0126 17:54:51.074901 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="copy"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.074967 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="copy"
Jan 26 17:54:51 crc kubenswrapper[4823]: E0126 17:54:51.075040 4823 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="extract-utilities"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.075116 4823 state_mem.go:107] "Deleted CPUSet assignment" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="extract-utilities"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.075431 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="copy"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.075525 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="8542386b-e3e4-47b9-ad0f-aea78951dd82" containerName="gather"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.075610 4823 memory_manager.go:354] "RemoveStaleState removing state" podUID="7396f2e4-d114-4595-9b11-784b659b309e" containerName="registry-server"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.077386 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.098206 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtjfl"]
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.185661 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-catalog-content\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.185718 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-utilities\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.185899 4823 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqfn4\" (UniqueName: \"kubernetes.io/projected/15aafcd2-7b44-43db-867a-aff2070543a6-kube-api-access-tqfn4\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.287615 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqfn4\" (UniqueName: \"kubernetes.io/projected/15aafcd2-7b44-43db-867a-aff2070543a6-kube-api-access-tqfn4\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.287742 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-catalog-content\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.287765 4823 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-utilities\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.288176 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-utilities\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.288330 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-catalog-content\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.310121 4823 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqfn4\" (UniqueName: \"kubernetes.io/projected/15aafcd2-7b44-43db-867a-aff2070543a6-kube-api-access-tqfn4\") pod \"redhat-marketplace-qtjfl\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") " pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.396146 4823 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:54:51 crc kubenswrapper[4823]: I0126 17:54:51.852401 4823 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtjfl"]
Jan 26 17:54:52 crc kubenswrapper[4823]: I0126 17:54:52.516284 4823 generic.go:334] "Generic (PLEG): container finished" podID="15aafcd2-7b44-43db-867a-aff2070543a6" containerID="762a8e522c73e03c1a8969285c271a5ece6d88b4d54d88849611b6526c63f7b0" exitCode=0
Jan 26 17:54:52 crc kubenswrapper[4823]: I0126 17:54:52.516397 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtjfl" event={"ID":"15aafcd2-7b44-43db-867a-aff2070543a6","Type":"ContainerDied","Data":"762a8e522c73e03c1a8969285c271a5ece6d88b4d54d88849611b6526c63f7b0"}
Jan 26 17:54:52 crc kubenswrapper[4823]: I0126 17:54:52.516621 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtjfl" event={"ID":"15aafcd2-7b44-43db-867a-aff2070543a6","Type":"ContainerStarted","Data":"f22d3cca934bd7ace45ddd8ba7f6437718c0677a172a8a3578afd9ed66f56047"}
Jan 26 17:54:53 crc kubenswrapper[4823]: I0126 17:54:53.529733 4823 generic.go:334] "Generic (PLEG): container finished" podID="15aafcd2-7b44-43db-867a-aff2070543a6" containerID="9766f990bb66d1e6bb6c8705c8b81a6af8dcbd58c960885489df74d006f45212" exitCode=0
Jan 26 17:54:53 crc kubenswrapper[4823]: I0126 17:54:53.529966 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtjfl" event={"ID":"15aafcd2-7b44-43db-867a-aff2070543a6","Type":"ContainerDied","Data":"9766f990bb66d1e6bb6c8705c8b81a6af8dcbd58c960885489df74d006f45212"}
Jan 26 17:54:54 crc kubenswrapper[4823]: I0126 17:54:54.546117 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtjfl" event={"ID":"15aafcd2-7b44-43db-867a-aff2070543a6","Type":"ContainerStarted","Data":"1c3d4f50ccee989e4e945a269d7d34105793e3dead9c3fd912008b4fca2c4099"}
Jan 26 17:54:54 crc kubenswrapper[4823]: I0126 17:54:54.570698 4823 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qtjfl" podStartSLOduration=2.14801383 podStartE2EDuration="3.570682847s" podCreationTimestamp="2026-01-26 17:54:51 +0000 UTC" firstStartedPulling="2026-01-26 17:54:52.517682072 +0000 UTC m=+11289.203145187" lastFinishedPulling="2026-01-26 17:54:53.940351099 +0000 UTC m=+11290.625814204" observedRunningTime="2026-01-26 17:54:54.569688851 +0000 UTC m=+11291.255151956" watchObservedRunningTime="2026-01-26 17:54:54.570682847 +0000 UTC m=+11291.256145952"
Jan 26 17:55:01 crc kubenswrapper[4823]: I0126 17:55:01.396543 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:55:01 crc kubenswrapper[4823]: I0126 17:55:01.396909 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:55:01 crc kubenswrapper[4823]: I0126 17:55:01.451114 4823 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:55:01 crc kubenswrapper[4823]: I0126 17:55:01.661037 4823 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:55:01 crc kubenswrapper[4823]: I0126 17:55:01.712555 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtjfl"]
Jan 26 17:55:03 crc kubenswrapper[4823]: I0126 17:55:03.634381 4823 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qtjfl" podUID="15aafcd2-7b44-43db-867a-aff2070543a6" containerName="registry-server" containerID="cri-o://1c3d4f50ccee989e4e945a269d7d34105793e3dead9c3fd912008b4fca2c4099" gracePeriod=2
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.508834 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.509497 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.646535 4823 generic.go:334] "Generic (PLEG): container finished" podID="15aafcd2-7b44-43db-867a-aff2070543a6" containerID="1c3d4f50ccee989e4e945a269d7d34105793e3dead9c3fd912008b4fca2c4099" exitCode=0
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.646596 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtjfl" event={"ID":"15aafcd2-7b44-43db-867a-aff2070543a6","Type":"ContainerDied","Data":"1c3d4f50ccee989e4e945a269d7d34105793e3dead9c3fd912008b4fca2c4099"}
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.646672 4823 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qtjfl" event={"ID":"15aafcd2-7b44-43db-867a-aff2070543a6","Type":"ContainerDied","Data":"f22d3cca934bd7ace45ddd8ba7f6437718c0677a172a8a3578afd9ed66f56047"}
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.646693 4823 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f22d3cca934bd7ace45ddd8ba7f6437718c0677a172a8a3578afd9ed66f56047"
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.649959 4823 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtjfl"
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.766983 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-catalog-content\") pod \"15aafcd2-7b44-43db-867a-aff2070543a6\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") "
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.767053 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqfn4\" (UniqueName: \"kubernetes.io/projected/15aafcd2-7b44-43db-867a-aff2070543a6-kube-api-access-tqfn4\") pod \"15aafcd2-7b44-43db-867a-aff2070543a6\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") "
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.767431 4823 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-utilities\") pod \"15aafcd2-7b44-43db-867a-aff2070543a6\" (UID: \"15aafcd2-7b44-43db-867a-aff2070543a6\") "
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.768824 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-utilities" (OuterVolumeSpecName: "utilities") pod "15aafcd2-7b44-43db-867a-aff2070543a6" (UID: "15aafcd2-7b44-43db-867a-aff2070543a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.774522 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15aafcd2-7b44-43db-867a-aff2070543a6-kube-api-access-tqfn4" (OuterVolumeSpecName: "kube-api-access-tqfn4") pod "15aafcd2-7b44-43db-867a-aff2070543a6" (UID: "15aafcd2-7b44-43db-867a-aff2070543a6"). InnerVolumeSpecName "kube-api-access-tqfn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.802578 4823 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15aafcd2-7b44-43db-867a-aff2070543a6" (UID: "15aafcd2-7b44-43db-867a-aff2070543a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.869329 4823 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-utilities\") on node \"crc\" DevicePath \"\""
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.869435 4823 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15aafcd2-7b44-43db-867a-aff2070543a6-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 26 17:55:04 crc kubenswrapper[4823]: I0126 17:55:04.869452 4823 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqfn4\" (UniqueName: \"kubernetes.io/projected/15aafcd2-7b44-43db-867a-aff2070543a6-kube-api-access-tqfn4\") on node \"crc\" DevicePath \"\""
Jan 26 17:55:05 crc kubenswrapper[4823]: I0126 17:55:05.654498 4823 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qtjfl" Jan 26 17:55:05 crc kubenswrapper[4823]: I0126 17:55:05.682997 4823 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtjfl"] Jan 26 17:55:05 crc kubenswrapper[4823]: I0126 17:55:05.691804 4823 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qtjfl"] Jan 26 17:55:05 crc kubenswrapper[4823]: E0126 17:55:05.719584 4823 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15aafcd2_7b44_43db_867a_aff2070543a6.slice/crio-f22d3cca934bd7ace45ddd8ba7f6437718c0677a172a8a3578afd9ed66f56047\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15aafcd2_7b44_43db_867a_aff2070543a6.slice\": RecentStats: unable to find data in memory cache]" Jan 26 17:55:07 crc kubenswrapper[4823]: I0126 17:55:07.571533 4823 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15aafcd2-7b44-43db-867a-aff2070543a6" path="/var/lib/kubelet/pods/15aafcd2-7b44-43db-867a-aff2070543a6/volumes" Jan 26 17:55:34 crc kubenswrapper[4823]: I0126 17:55:34.512439 4823 patch_prober.go:28] interesting pod/machine-config-daemon-kv6z2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 26 17:55:34 crc kubenswrapper[4823]: I0126 17:55:34.513359 4823 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kv6z2" podUID="1a3a166e-bc51-4f3e-baf7-9a9d3cd4e85d" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"